Artificial Intelligence (AI) has captured the imagination of researchers, technologists, and the general public alike. As we stand on the cusp of potentially groundbreaking advancements, the debate surrounding the future of AI is more polarized than ever. Some researchers believe we will achieve Artificial General Intelligence (AGI) by 2028, while others are skeptical, arguing that we might soon hit a plateau. This discussion is part of the broader AI hype cycle, a framework created by Gartner to plot the excitement, expectations, and eventual disillusionment of emerging technologies.
In this blog, we will explore the current state of AI, the hype surrounding AGI, the trends in AI hardware, and the challenges that lie ahead. We’ll also delve into whether AI is truly on an exponential growth trajectory or if we’re approaching a slowdown.
The AI Hype Cycle: Understanding the Phases
Innovation Trigger
The AI hype cycle begins with the Innovation Trigger, a phase where new technologies emerge and initial excitement builds. This is where groundbreaking ideas like Quantum AI or AGI find themselves today. At this stage, the technology is in its infancy, and its potential is primarily recognized by early adopters and researchers. For example, Embodied AI, which gives AI language models a physical form, such as the Figure 02 humanoid robot, is currently in this phase. The “wow effect” is easy to generate during this period, as the possibilities seem endless.
Peak of Inflated Expectations
As more people become aware of the technology, it moves into the Peak of Inflated Expectations. Here, the media and industry hype reaches its zenith, and substantial investments pour in. This is the current state of Foundational Models, where companies like Google, Meta, Anthropic, and OpenAI are leading the charge. These models, which form the backbone of many AI applications, are at the center of attention, driving advancements in natural language processing, computer vision, and more.
The Valley of Disappointment
However, what often follows the peak is the Valley of Disappointment, the phase Gartner formally labels the Trough of Disillusionment. Despite the initial excitement and investment, many technologies fail to meet the lofty expectations set during the peak. This is where skepticism sets in and the limitations of the technology become apparent. Today, some argue that AI, particularly Generative AI, is entering this phase. The improvements from GPT-3 to GPT-4 were significant, but the leap to GPT-5 may not evoke the same level of excitement. This flattening of progress has led some to question whether we are truly on an exponential curve or whether we are approaching the limits of what current AI technologies can achieve.
The State of AI: Are We on an Exponential Curve?
Moore’s Law and AI’s Reality Check
To understand the current state of AI, it’s important to revisit Moore’s Law—the observation that the number of transistors on a silicon chip doubles roughly every two years, leading to exponential growth in computing power. This law has driven much of the progress in technology over the past few decades. However, when we apply this concept to AI, the reality is more complex.
While AI technologies have made significant strides, the improvements are not always proportional to the increase in computing power. The jump from GPT-3 to GPT-4 was substantial, for example, but as we pour ever more resources into AI, the returns are diminishing. This becomes evident when we compare performance gains to the computing power required to achieve them: despite exponential increases in compute, the performance improvements are often closer to linear, suggesting that we may be hitting a plateau.
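To make that point concrete, here is a minimal, purely illustrative Python sketch. It assumes a benchmark score that grows with the logarithm of training compute; the constants and the relationship itself are assumptions chosen for illustration, not measurements from any real model or benchmark.

```python
import math

# Toy illustration (not real benchmark data): if a benchmark score grows
# roughly with the *logarithm* of training compute, then every 10x increase
# in compute buys about the same additive gain -- exponential spending,
# roughly linear returns.

A = -30.0   # assumed intercept (hypothetical, chosen to keep scores in a 0-100 range)
B = 4.0     # assumed gain per 10x of compute (hypothetical)

def toy_score(compute_flops: float) -> float:
    """Hypothetical benchmark score as a log-linear function of training compute."""
    return A + B * math.log10(compute_flops)

previous = None
for exponent in range(20, 27):            # 1e20 .. 1e26 FLOPs of training compute
    c = 10.0 ** exponent
    s = toy_score(c)
    delta = "" if previous is None else f"  (+{s - previous:.1f} for 10x more compute)"
    print(f"compute = 1e{exponent} FLOPs -> score ~ {s:.1f}{delta}")
    previous = s
```

Run it and each tenfold jump in compute adds the same handful of points: the spending curve is exponential while the score curve stays linear.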
The Plateau of AI Progress
The notion of a plateau in AI progress is not new. Many in the industry have observed that while AI models are becoming more sophisticated, the gains are diminishing. This is particularly true when we consider the computational demands of training these models. Industry estimates suggest that the computing power required for AI tasks will increase 100 to 1,000 times over the next five years. While this has led to the rise of AI chip startups like Groq and Cerebras, which are raising significant investment rounds, there is a growing concern that the returns on these investments may not be as high as anticipated.
The Battle for Efficiency: AI Hardware Trends
General vs. Specialized AI Accelerators
One of the most significant trends in AI hardware is the battle between general-purpose AI accelerators and specialized AI chips. On one side, there is a strong push to build AI accelerators that are as general as possible, capable of handling a wide range of tasks. On the other side, there is an equally strong drive towards efficiency, leading to the development of Application-Specific Integrated Circuits (ASICs).
ASICs are designed with a specific purpose in mind, dedicating most of the silicon area to hardcoded operations. These chips are highly efficient for their intended tasks but offer little flexibility. This trend towards specialization poses a significant risk: if a new AI algorithm emerges that requires different operations, ASICs can become obsolete. This was the fate of startups like Graphcore, which went all in on convolutional neural networks, only to be acquired by SoftBank in 2024 after missing the wave of transformer-based models.
The Cost of AI Progress
Another major challenge in AI development is the cost. Over the past few years, the industry has shifted from million-dollar computing clusters to billion-dollar ones. The costs of compute, energy, and maintenance have skyrocketed, leading some to question whether the current trajectory is sustainable. While AI has the potential to address global challenges, such as healthcare and drug discovery, the financial and environmental costs of AI development are becoming increasingly difficult to justify.
The Valley of Disappointment: Generative AI’s Struggles
The Dunning-Kruger Effect in AI
As AI technologies, particularly Generative AI, move through the Valley of Disappointment, we are witnessing a shift in perception. Initially, generative models like those used to create text, images, and videos were hailed as revolutionary. However, as the technology matures, the excitement has waned. Researchers from MIT estimate that only 5% of tasks will be significantly affected by generative AI, and productivity gains may be as low as 0.5%.
This disillusionment is reminiscent of the Dunning-Kruger Effect, where individuals overestimate their understanding of a topic early on, only to realize the complexity and limitations as they gain more experience. In the context of AI, early hype has given way to a more measured understanding of the technology’s capabilities and limitations.
The Myth of AI Scaling Laws
Another misconception that has fueled the AI hype is a naive reading of AI scaling laws: the idea that by doubling the amount of data and computing power, we can double the capabilities of AI models. There is some truth to the underlying laws, but they actually describe power-law relationships, in which each doubling of data or compute buys a smaller and smaller reduction in loss. Feeding twice as much data into a model like ChatGPT may improve its output, but it comes nowhere near doubling its capabilities.
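For readers who want to see what the published laws actually say, here is a short sketch of the Chinchilla-style formula from Hoffmann et al. (2022), in which loss falls as a power law in parameter count N and training tokens D. The constants below approximate the published fit and are used only to illustrate the shape of the curve, not to make precise predictions.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022): predicted loss
# L(N, D) = E + A / N**alpha + B / D**beta, a power law in parameters N
# and training tokens D. Constants approximate the published fit and are
# used here purely for illustration.

E = 1.69                  # irreducible loss (approx. published fit)
A, ALPHA = 406.4, 0.34    # parameter-count term
B, BETA = 410.7, 0.28     # data term

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with n_params parameters and n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

n = 70e9                                   # a 70B-parameter model
for tokens in (1.4e12, 2.8e12, 5.6e12):    # doubling the training tokens, twice
    print(f"{tokens/1e12:.1f}T tokens -> predicted loss {loss(n, tokens):.3f}")
```

Doubling the training data twice over shaves only a percent or so off the predicted loss each time, which is a long way from doubling the model's capabilities.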
Moreover, the costs associated with scaling AI models are becoming prohibitive. As we move from million-dollar to billion-dollar computing clusters, the marginal gains in performance may not justify the exponential increase in costs. This has led some in the industry to advocate for a more focused approach, concentrating on AI applications that address global challenges, such as healthcare and scientific research, rather than pursuing ever-larger models with diminishing returns.
What’s Next: The Path to the Plateau of Productivity
Real-World Applications of AI
Despite the challenges and disillusionment, AI continues to make significant strides in certain areas. As the hype cycle progresses, AI technologies are likely to move towards the Plateau of Productivity—a phase where the technology becomes stable, reliable, and widely adopted. This is already happening with applications like computer vision and cloud computing, where AI has demonstrated real-world benefits and a significant impact on business.
One of the key areas where AI holds enormous potential is in healthcare and drug discovery. Models like DeepMind’s AlphaFold have already revolutionized our understanding of protein folding, opening new avenues for drug development and personalized medicine. Similarly, AI has the potential to address some of the most pressing challenges in climate science, energy management, and space exploration.
The Need for Breakthroughs
For AI to continue its upward trajectory, the industry needs another one or two major breakthroughs. While current technologies like Generative AI and Foundational Models have laid a solid foundation, they are not enough to sustain exponential growth. The next wave of AI advancements will likely come from new algorithms, architectures, or applications that fundamentally change the way we approach AI.
This is a sentiment echoed by AI luminaries like Yann LeCun, who has warned against focusing too much on large language models (LLMs) at the expense of other promising areas of AI research. The future of AI may lie in fields that are currently underexplored, such as neuromorphic computing, quantum AI, or bio-inspired AI.
Conclusion: Navigating the AI Landscape
The journey of AI from its inception to its current state has been marked by periods of intense excitement, overblown expectations, and eventual disillusionment. As we stand at the crossroads of what could be the next major phase in AI development, it is crucial to approach the technology with a balanced perspective.
While some researchers remain optimistic about achieving AGI by 2028, others caution against placing too much faith in the current trajectory of AI progress. The reality is that AI, like any other technology, is subject to the laws of diminishing returns, and the road to AGI is fraught with challenges.
At the same time, the potential of AI to transform industries and solve global problems cannot be overstated. As we move through the AI hype cycle, the focus should shift from chasing the next big breakthrough to applying AI in ways that generate real value for society.
As we navigate this complex landscape, it is essential to remain informed, critical, and open to new ideas. The AI hype cycle is not just a chart—it’s a reflection of our collective hopes, fears, and aspirations for the future of technology. Whether AI reaches its full potential or hits a plateau, one thing is certain: the journey is far from over.
Let us know your thoughts in the comments below. Do you believe we are on the verge of achieving AGI, or are we headed for a plateau? Share this blog with your friends and colleagues, and let’s continue the conversation about the future of AI.