What is AGI?


Artificial General Intelligence, or AGI, refers to an AI with broad, human-level cognitive abilities across diverse tasks. In simple terms, it's the kind of AI that could understand or learn anything a human can – not just excel at one narrow task like playing chess or generating text.

But here's the thing: not everyone even agrees on what AGI means. Cognitive scientist Melanie Mitchell points out the term has been used "in so many different ways it's almost lost any rigorous meaning". Some researchers dismiss the term entirely. Yet it's become the north star for every major AI lab, so let's dig in.

A Quick History Lesson

The concept traces back to Alan Turing's 1950 "imitation game" – what we now call the Turing Test. His elegantly simple idea: if a machine can convince a human it's human through conversation, it's demonstrating intelligence. Turing predicted that by 2000, machines would be able to fool an average interrogator about 30% of the time after five minutes of questioning. He wasn't far off.

John McCarthy coined "artificial intelligence" at the 1956 Dartmouth Conference, hypothesizing that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Nearly 70 years later, we're still working on that precise description.

Wait, Don't We Already Have AGI?

Modern AI systems are crushing benchmarks. GPT-4 scores in the 90th percentile on the SAT, passes the bar exam in the top 10%, and beats most humans at coding challenges. These systems have arguably passed the Turing Test – they regularly fool humans in conversation.

So why are the leaders of OpenAI and Google DeepMind saying we're not there yet?

The Missing Pieces

Sam Altman: It's About Autonomy

Altman boldly stated in January 2025: "We are now confident we know how to build AGI as we have traditionally understood it." But he admits current systems, even GPT-5, aren't AGI because they "lack the ability to learn autonomously from new information in real time."

OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." Current systems operate in short bursts within limited context windows – they can't continuously learn and adapt the way humans do throughout a workday, let alone a career.

Demis Hassabis: Where's the Curiosity?

DeepMind's CEO takes a different angle. On 60 Minutes, Hassabis highlighted that advanced AI systems haven't yet shown curiosity or the ability to formulate entirely new questions. They solve problems within familiar patterns but "don't originate new scientific conjectures".

His test for AGI: could an AI system have discovered general relativity with the same information Einstein had in the early 1900s? It's not about solving known problems – it's about generating breakthrough insights.

Yann LeCun: Wrong Approach Entirely

Meta's chief AI scientist is the field's loudest skeptic. "There's absolutely no way that autoregressive LLMs, the type that we know today, will reach human intelligence," he declared at CES 2025.

LeCun argues the entire AGI framing is wrong – human intelligence isn't general but "extremely specialized," making AGI a misleading target. He advocates for "objective-driven AI" that learns about the physical world through sensors and video, building genuine world models rather than statistical text patterns.

Of course, it's worth noting that Meta's own AI efforts haven't exactly been impressive to date. I've been pretty underwhelmed by my Meta smart glasses – I don't know about anyone else.

So When Will AGI Arrive?

Predictions vary widely on the exact timing, but most leaders in the field expect we'll get there – the disagreement is over whether that's a matter of years or decades:

  • Sam Altman (OpenAI): 2025-2029, with AI agents "joining the workforce" this year
  • Dario Amodei (Anthropic): Rejects the term AGI but sees "powerful AI" arriving as early as 2026 – systems smarter than Nobel Prize winners that can run millions of instances at 10-100x human speed
  • Demis Hassabis (DeepMind): 5-10 years for genuine AGI
  • Yann LeCun (Meta): Beyond a decade with current approaches, if ever

Academic surveys generally push timelines to 2040-2050, while industry leaders cluster around 2025-2030.

The Real Questions

Rather than endlessly debating definitions, let me suggest more interesting questions to focus on:

What's Different Today?

The pace of AI development has become genuinely unprecedented. Since 2010, the computing power used to train AI models has grown by roughly 10 billion times, with a doubling time of just 3.4 to 6 months, depending on the estimate. Compare that to Moore's Law, which doubled transistor density every two years.

OpenAI's analysis shows we've gone from a steady 2-year doubling time before 2012 to this explosive 3.4-month cycle. To put this in perspective: GPT-4 used 100 million times more compute than AlexNet, the breakthrough computer vision system from just 11 years earlier.

And it's not just raw compute. Algorithm improvements contribute the equivalent of doubling compute every 9 months. The combination of hardware acceleration and algorithmic innovation creates a compounding effect that's reshaping what's possible quarter by quarter, not decade by decade.
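
To make those doubling times concrete, here's a quick back-of-the-envelope sketch in Python. The figures are the rough estimates cited above, not precise measurements, and the five-year horizon is purely an illustrative assumption:

    def growth_factor(years, doubling_time_months):
        # How many times a quantity multiplies over `years`,
        # given a fixed doubling time in months.
        doublings = (years * 12) / doubling_time_months
        return 2 ** doublings

    YEARS = 5  # illustrative horizon, an assumption for this sketch

    # Figures quoted above -- rough estimates, not measurements:
    moore = growth_factor(YEARS, 24)       # Moore's Law: ~2-year doubling
    compute = growth_factor(YEARS, 3.4)    # post-2012 training-compute trend
    algorithms = growth_factor(YEARS, 9)   # algorithmic-efficiency gains

    print(f"Over {YEARS} years:")
    print(f"  Moore's Law pace:       ~{moore:,.0f}x")
    print(f"  Training-compute trend: ~{compute:,.0f}x")
    print(f"  Algorithmic progress:   ~{algorithms:,.0f}x")
    # Hardware scaling and algorithmic gains compound multiplicatively:
    print(f"  Combined effective compute: ~{compute * algorithms:,.0f}x")

Run it and the training-compute trend alone works out to a factor of roughly 200,000 over five years, versus about 6x at Moore's Law pace – that's the gap the numbers above are describing.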

What Will This Intelligence Be Capable Of?

This brings us to the trillion-dollar questions: Can it solve climate change? Cure cancer? End poverty? The answer is we don't know, but the potential is staggering. An intelligence that can operate at superhuman levels across all domains, running millions of instances in parallel, could tackle problems we've struggled with for centuries.

But capabilities are only half the equation...


How Realistic Are the Risks?

The same leaders racing toward AGI are increasingly vocal about risks. Geoffrey Hinton left Google to warn about existential dangers, saying he now believes machines could be smarter than us sooner than he thought. Yoshua Bengio calls for solving the control problem before we achieve human-level AI, warning that current AI development could lead to systems that "turn against humans". Even Sam Altman, while optimistic, acknowledges that getting alignment wrong could be catastrophic and has called for global coordination to slow down at critical junctures.

The risks range from near-term (job displacement, misinformation that could "undermine democracy", autonomous weapons) to existential (loss of human agency, unaligned superintelligence). The challenge is that we're building something smarter than us – how do you control something that can outthink you?

An Unprecedented Time

This is an unprecedented moment to be alive – a time I either never thought I'd witness or assumed I'd be very old when it arrived. We're potentially on the cusp of the most significant transformation in human history.

It's something I'm devoting serious time to understanding, and I think everyone should. Not just learning about the technology, but actively adopting it, experimenting with it, and most importantly, making sure our children and the next generation are prepared for a world that will look radically different from the one we grew up in.

Whether AGI arrives in 2025 or 2050, whether we call it AGI or something else entirely, the trajectory is clear: AI capabilities are accelerating faster than our ability to fully comprehend their implications. The question isn't if we'll achieve human-level AI, but what we'll do when we get there.

What are your thoughts? Drop a comment below – I'd love to hear how you're thinking about and preparing for this transformation.