Why AGI Is Close yet So Far Away

Rohit Raj
4 min read · Dec 6, 2024


I have long been fascinated by computer chess programs. Today's top engines, Stockfish and Leela Chess Zero, are marvels of machine learning and computational efficiency. They evaluate positions at astonishing depth, routinely searching forty or more half-moves (plies) ahead. By contrast, even world-class grandmasters struggle to consistently calculate more than seven or eight moves deep, and the average chess enthusiast can barely manage two or three before losing the thread. Yet despite their near-godlike skill in this narrow domain, these engines still make mistakes against each other: even with extraordinary processing power and highly refined algorithms, the complexity of chess ensures occasional inaccuracies.
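
To make the idea concrete, here is a minimal sketch of alpha-beta search, the recursion at the heart of classical engines like Stockfish, applied to a toy game (Nim) so it stays self-contained. Real engines layer a learned evaluation function and dozens of pruning heuristics on top of this skeleton; this is an illustration, not how any production engine is written.

```python
def alphabeta(stones, depth, alpha, beta, maximizing):
    """Alpha-beta search on Nim: players alternate taking 1-3 stones,
    and whoever takes the last stone wins. Chess engines scale this
    same recursion, plus a learned evaluation, to enormous depths."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # out of search depth: score the position as unclear
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > stones:
            break
        value = alphabeta(stones - take, depth - 1, alpha, beta, not maximizing)
        best = max(best, value) if maximizing else min(best, value)
        if maximizing:
            alpha = max(alpha, best)
        else:
            beta = min(beta, best)
        if alpha >= beta:
            break  # the opponent would never allow this line: prune it
    return best

# 21 stones is a first-player win (always leave a multiple of 4 behind).
print(alphabeta(stones=21, depth=25, alpha=float("-inf"),
                beta=float("inf"), maximizing=True))  # prints 1
```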

What makes chess noteworthy is its tight constraints. Chess is a closed system: it has strictly defined rules, a finite set of legal moves in any position, and just three possible outcomes (win, loss, or draw). This bounded nature allows programs to train on billions of self-play games, directly mapping sequences of moves to final results.
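
The mapping from games to training data is almost mechanical. Below is a toy sketch using the python-chess library (my choice for illustration; AlphaZero-style systems pair a loop like this with a neural network and tree search rather than random moves):

```python
import random
import chess  # python-chess, installable with `pip install chess`

def random_self_play(max_plies=200):
    """Play one game with random legal moves and label every position
    with the final result from White's point of view."""
    board = chess.Board()
    fens = []
    while not board.is_game_over(claim_draw=True) and len(fens) < max_plies:
        fens.append(board.fen())
        board.push(random.choice(list(board.legal_moves)))
    outcome = {"1-0": 1.0, "0-1": -1.0}.get(board.result(claim_draw=True), 0.0)
    return [(fen, outcome) for fen in fens]

# Every (position, result) pair is a perfectly labeled training example,
# a luxury that chess's closed rules provide and real life does not.
dataset = [pair for _ in range(10) for pair in random_self_play()]
print(f"{len(dataset)} labeled positions from 10 random games")
```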

But real life is unimaginably more complex. Instead of a neatly defined set of moves, there are countless possibilities unfolding at every moment. Actions in life seldom have universally agreed-upon evaluation metrics. A choice that seems beneficial in one context might be harmful in another; a short-term gain could produce long-term catastrophe. Unlike chess, the “game” of reality does not present consistent, limited data or clear-cut success criteria.

This is why we remain far from achieving Artificial General Intelligence (AGI). Current Large Language Models (LLMs), such as GPT-4, are trained primarily by predicting the next token in text. Language, while richly expressive, is still far more constrained than open-ended reality: its vocabulary and structures are finite, which lets LLMs master pattern recognition and synthesis within textual data. But to surpass human intelligence in a general sense, an AI would need to train on environments more complex than anything humans have fully understood, let alone digitized.
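
That training objective is simple enough to fit in a few lines. Here is a toy sketch in PyTorch (my choice of framework; real LLMs use deep transformers over trillions of tokens, but the loss is the same):

```python
import torch
import torch.nn as nn

# A deliberately tiny "language model": an embedding and a linear head.
vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # a fake token sequence
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: predict the next token

logits = model(inputs)  # shape: (batch, sequence, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
print(f"next-token cross-entropy: {loss.item():.3f}")
```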

Here we encounter another often-overlooked factor: we underestimate the complexity of human intelligence itself. True, it might be easy to imagine an AI outcalculating the average person in certain tasks. But human society’s collective intelligence has achieved feats that beggar belief: we have journeyed into space, manipulated matter at scales smaller than atoms, and built microchips with billions of transistors. None of this was achieved by a single mind; it required thousands of layers of abstract reasoning developed across decades by some of our brightest thinkers, all building on the knowledge of those who came before. For an AGI to truly benefit human society at a meaningful level, it must not simply outpace the average individual — it must match or surpass the collaborative, cumulative genius of our entire civilization.

So far, the most remarkable AI advances beyond GPT-4's capabilities have come from well-structured scientific domains. Mathematics, physics, chemistry, and coding present clearer rules and well-defined objectives. OpenAI's o1 models, introduced in September 2024, significantly improved reasoning in math and coding by scaling inference-time compute: the model generates a long chain of reasoning before committing to an answer. It sometimes takes minutes on a complex problem, and that extra deliberation produces more accurate, and occasionally more inventive, solutions.
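
OpenAI has not published o1's internals, so take the following only as an illustration of the general idea of spending more compute at inference time. One well-known recipe is best-of-n sampling with a verifier; both functions below are hypothetical stand-ins:

```python
import random

def sample_solution(problem: str) -> str:
    # Stand-in: in practice this would call an LLM with chain-of-thought
    # prompting and a nonzero sampling temperature.
    return f"candidate solution #{random.randint(0, 999)}"

def verifier_score(problem: str, solution: str) -> float:
    # Stand-in: in practice a learned verifier, unit tests, or a proof checker.
    return random.random()

def best_of_n(problem: str, n: int = 16) -> str:
    """Spend n times the compute: sample n candidate solutions and keep
    the one the verifier likes best. Larger n buys better answers."""
    candidates = [sample_solution(problem) for _ in range(n)]
    return max(candidates, key=lambda s: verifier_score(problem, s))

print(best_of_n("Prove that the square root of 2 is irrational."))
```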

Another landmark example is DeepMind's AlphaFold, which tackled the protein-folding problem. Though biological systems are complex, predicting a protein's structure still fits into a scientific framework with rigorous criteria for success. This allowed deep learning to achieve a dramatic breakthrough in a domain long considered one of biology's grand challenges.
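
That rigor shows up in the evaluation itself. CASP, the benchmark AlphaFold famously topped, scores predictions with precise geometric metrics such as GDT_TS; a simpler relative, RMSD, makes the point just as well. A toy sketch, with the structural superposition step omitted:

```python
import numpy as np

def rmsd(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate arrays,
    in the same units (angstroms). Real pipelines first superimpose the
    two structures; that alignment step is omitted here for brevity."""
    return float(np.sqrt(np.mean(np.sum((predicted - actual) ** 2, axis=1))))

# Toy example: a prediction that wobbles around the true structure.
true_coords = np.random.rand(50, 3) * 10
pred_coords = true_coords + np.random.normal(scale=0.5, size=(50, 3))
print(f"RMSD: {rmsd(pred_coords, true_coords):.2f} angstroms")
```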

However, success in these specialized areas does not readily translate to the open complexity of social, economic, and moral decision-making. Clearly defined goals and success metrics let AI improve rapidly and even exceed human capabilities in narrow tasks. But outside these niches, the constraints vanish and reliable evaluation becomes murky. Without structured data or a coherent way to judge progress, training an AGI remains an elusive goal.

Conclusion

In the journey toward AGI, we find ourselves simultaneously closer than ever before and yet still profoundly distant. We can develop models that outperform humans on specialized tasks — chess, mathematics, protein folding, coding — and these feats grow more impressive by the day. But to reach true general intelligence, to surpass not just the average human but the collective intellectual might of our global society, AI must grapple with complexity that defies neat quantification. It’s one thing to solve equations or predict protein structures; it’s another to navigate the tangled webs of real life and achieve lasting, broadly beneficial outcomes.

This chasm will not be bridged by sheer scale of data or compute alone. It demands new training paradigms, careful evaluation methods, and perhaps entirely new understandings of what intelligence and knowledge really mean. Until we can successfully encode and navigate the open-ended intricacies of the real world — or develop creative methods that allow AI to learn and reason beyond predefined rules — AGI will remain a distant beacon, enticing and unreachable, shining just beyond the edge of our current understanding.

Written by Rohit Raj

Studied at IIT Madras and IIM Indore. Love Data Science