# Will there be dramatic changes on the day of the Singularity?

The birth of ChatGPT is undoubtedly a technical breakthrough. Everyone was surprised by its remarkably versatile responses across a wide range of domains. Some have even said, “The birth of ChatGPT marks the arrival of the singularity,” but in reality, the world hasn’t changed dramatically overnight.

Many of those following X may also feel that things haven’t changed dramatically, despite ChatGPT’s introduction.

First, let’s delve into ChatGPT and other Large Language Models (LLMs). From my understanding, neural networks (particularly the Transformer architecture used in LLMs) exhibit a time complexity of at most O(N) to O(N^2) per inference, where N is the length of the input text.
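To make the O(N^2) term concrete, here is a minimal sketch of the FLOP count for one self-attention layer; the function name and the head dimension `d` are hypothetical choices for illustration, not taken from any particular model:

```python
def attention_flops(n, d=64):
    """Rough FLOP count for one self-attention layer on a length-n input.

    The Q @ K^T score matrix has n*n entries, each a d-dimensional dot
    product, so the cost grows quadratically in the sequence length n.
    (d is a hypothetical head dimension chosen for illustration.)
    """
    scores = 2 * n * n * d    # computing Q @ K^T
    weighted = 2 * n * n * d  # applying softmax(scores) @ V
    return scores + weighted

# Doubling the input length roughly quadruples the attention cost.
ratio = attention_flops(2048) / attention_flops(1024)  # = 4.0
```

This quadratic growth in N is what motivates the O(N^2) upper bound mentioned above; the feed-forward layers, by contrast, scale linearly in N.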

For example, it seems unlikely that neural networks can solve NP-complete problems like the Traveling Salesman Problem (TSP) in polynomial time (doing so would amount to proving P = NP). Problems in the real world have inherent difficulty, much like having to physically traverse a certain distance (without teleportation) when traveling between two distant points: solving a problem requires an amount of space (memory), time, and energy proportional to its complexity.
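To see why TSP is believed to resist fast solutions, here is the brute-force approach, which is exact but examines (n-1)! tours; this is a sketch for illustration, not a practical solver:

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """Exact TSP by checking every tour starting and ending at city 0.

    dist is an n x n matrix of pairwise distances. The search space
    has (n-1)! tours, so the runtime grows factorially with n.
    """
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

# The search space explodes factorially: 10 cities -> 362,880 tours,
# 20 cities -> roughly 1.2e17 tours.
```

Known exact algorithms improve on this (dynamic programming runs in O(n^2 · 2^n)), but all remain super-polynomial, which is exactly the gap a single neural-network inference cannot be expected to close.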

The Diffusion Model is a trending approach in the field of image generation. In this model, one performs inference with the same network for a fixed number of diffusion steps (often in the thousands), regardless of the problem’s complexity.
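The fixed-step structure can be sketched as a reverse-diffusion sampling loop; `denoise_step` below stands in for a real model call and is purely hypothetical:

```python
import numpy as np

def sample(denoise_step, shape, num_steps=1000, seed=0):
    """Schematic reverse-diffusion sampling loop.

    `denoise_step` is a stand-in for the trained model, which removes
    a little noise at each step t. The loop always runs `num_steps`
    iterations, so the total compute is fixed in advance, regardless
    of how "hard" the particular image is to generate.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)      # one fixed-cost model inference
    return x

# With a dummy denoiser, the cost is exactly num_steps model calls.
calls = []
out = sample(lambda x, t: (calls.append(t) or x * 0.999), (4, 4), num_steps=50)
```

The point is that the compute budget is a constant chosen up front, which is the opposite of allocating time in proportion to problem difficulty.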

Related to this, the field of quantum computing has a result called the adiabatic theorem. It states that the time required to evolve accurately from an initial state A to the answer state B depends on the difficulty of the problem (in quantum adiabatic computation, a problem is encoded in the time evolution of a Hamiltonian H). Simple problems can be solved in polynomial time, whereas hard problems require exponential amounts of time.
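A common heuristic form of this condition ties the required runtime T to the minimum spectral gap between the ground state and the first excited state along the interpolation (the notation below follows the standard textbook statement and is not from the original text):

```latex
T \;\gtrsim\; \frac{\max_{s} \left| \langle 1(s) | \tfrac{dH}{ds} | 0(s) \rangle \right|}{\Delta_{\min}^{2}},
\qquad
\Delta_{\min} \;=\; \min_{s} \bigl( E_1(s) - E_0(s) \bigr)
```

For hard problem instances the gap Δ_min closes exponentially in the problem size, which is why the runtime blows up exponentially rather than polynomially.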

In essence, solving difficult problems takes time proportional to their complexity. This is analogous to increasing the diffusion steps in the Diffusion Model (in this sense, many image generation problems fall under “easy” problems that can be solved in polynomial time).

Given these considerations, it’s unlikely that AGI, even if it emerges, will produce an “exponential” leap (solving problems that can’t be solved in polynomial time at high speed). However, constant-factor speedups are possible: a 10x improvement would mean compressing 100 years of technological progress into 10 years, which would still be a significant acceleration.
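A quick calculation shows why a constant-factor speedup is not an exponential leap; this is a sketch of the arithmetic, not a claim about any specific system:

```python
import math

def extra_instance_size(speedup):
    """For an O(2^N) problem, a constant-factor speedup in hardware or
    algorithms only extends the largest solvable instance size N by
    log2(speedup) -- an additive constant, not an exponential leap."""
    return math.log2(speedup)

gain_10x = extra_instance_size(10)      # ~3.3 extra units of N
gain_1000x = extra_instance_size(1000)  # ~10 extra units of N
```

So even a 1000x faster machine barely dents an exponential-time problem, whereas for polynomial-time work (like ordinary engineering progress) the same factor translates directly into proportionally faster progress.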

Currently, chess AI has surpassed human professional players. Yet its play is not completely incomprehensible; most moves can be understood upon later analysis. I believe AI will be used in various domains in a similar way, with its decision-making processes becoming more transparent and interpretable over time.