March 27, 2026

John Carmack doesn’t sugarcoat things. The legendary programmer — co-creator of Doom, former CTO of Oculus VR, and now full-time AI entrepreneur — posted a stark assessment on X that cuts against the breathless optimism saturating Silicon Valley’s artificial intelligence discourse. His message, delivered with the matter-of-fact directness that has defined his public commentary for decades, amounts to a cold shower for anyone expecting artificial general intelligence to arrive on a neat corporate timeline.

“We are not on the brink of AGI,” Carmack wrote on X. The statement landed with particular force because Carmack isn’t an AI skeptic. He’s betting his career on it. He left Meta in 2022 to found Keen Technologies, an AI startup pursuing AGI with what he’s described as a lean, efficiency-obsessed approach. When someone who has staked everything on building AGI tells you it’s not around the corner, the industry should listen.


Carmack’s post elaborated on his position with characteristic precision. He acknowledged the remarkable capabilities of current large language models but drew a firm line between impressive pattern matching and genuine general intelligence. The distinction matters enormously — not just philosophically, but commercially. Billions of dollars in venture capital and corporate R&D spending are being allocated based on assumptions about how quickly AI systems will achieve human-level reasoning across domains. If those timelines are wrong, the financial consequences will be severe.

This isn’t a fringe position, but it is an increasingly lonely one among AI company founders. The prevailing narrative from OpenAI, Anthropic, Google DeepMind, and others has been one of accelerating capability curves and shrinking timelines. Sam Altman has suggested AGI could arrive within a few years. Dario Amodei of Anthropic published a lengthy essay in late 2024 describing a world transformed by powerful AI systems within the decade. Demis Hassabis at Google DeepMind has offered similarly ambitious projections.

Carmack sees it differently. And his track record of engineering judgment gives that dissent real weight.

The tension between AI optimists and realists has been building for months. Scaling laws — the empirical observation that larger models trained on more data with more compute tend to perform better — have been the theological foundation of the current AI boom. But cracks have appeared. Reports from Reuters in late 2024 indicated that several leading AI labs were encountering diminishing returns from simply scaling up existing architectures. The easy gains from making models bigger appeared to be plateauing, at least for certain benchmarks and capabilities.
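The "bigger tends to be better, but less so each time" dynamic behind these scaling laws can be made concrete with a toy calculation. The sketch below uses the power-law form popularized by scaling-law research (constants echo the published "Chinchilla" fit from Hoffmann et al., 2022, for the parameter-count term only); it is an illustration of the shape of the curve, not a model of any specific lab's results.

```python
def predicted_loss(n_params, a=406.4, alpha=0.34, floor=1.69):
    """Toy power-law scaling curve: loss falls as parameters grow,
    approaching an irreducible floor. Constants are illustrative,
    borrowed from the published Chinchilla fit's parameter term."""
    return a * n_params ** (-alpha) + floor

# Each 10x increase in model size buys a smaller absolute improvement.
sizes = [1e8, 1e9, 1e10, 1e11]
losses = [predicted_loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

assert all(g > 0 for g in gains)       # bigger is still better...
assert gains[0] > gains[1] > gains[2]  # ...but returns diminish
```

The shape is the whole story: progress never stops on such a curve, yet each order of magnitude of compute purchases less than the last, which is exactly the "plateauing easy gains" pattern the Reuters reporting described.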

This doesn’t mean progress has stopped. Far from it. OpenAI’s o1 and o3 reasoning models, Google’s Gemini 2.5, and Anthropic’s Claude series have all demonstrated meaningful improvements in complex reasoning tasks. But there’s a difference between steady, impressive progress and the exponential takeoff that AGI timeline predictions implicitly assume. Carmack seems to be pointing precisely at that gap.

His approach at Keen Technologies reflects this more measured view. Rather than raising enormous war chests to build ever-larger foundation models — the strategy favored by well-capitalized competitors — Carmack has emphasized efficiency, smaller teams, and architectural innovation. He raised $20 million in initial funding, a rounding error compared to the billions flowing into OpenAI, Anthropic, and xAI. The bet is that cleverness in design can substitute for brute-force compute, at least to a degree.

It’s a contrarian bet in a market drunk on scale.

The broader AI industry finds itself in an awkward position. Public companies have committed staggering sums to AI infrastructure. Microsoft, Google, Amazon, and Meta collectively plan to spend well over $200 billion on capital expenditures in 2025, much of it directed toward data centers and AI hardware. These investments are predicated on the assumption that AI capabilities will continue improving rapidly enough to generate returns that justify the spending. If Carmack is right that AGI remains distant, the question becomes whether the intermediate capabilities — better coding assistants, more capable customer service bots, improved search — can carry the financial burden of that infrastructure buildout.

Wall Street has started asking this question more pointedly. After an initial period of uncritical enthusiasm, analysts at Goldman Sachs and investors at Sequoia Capital have published research questioning whether AI revenue growth can match the pace of capital deployment. The gap between AI spending and AI revenue generation remains wide. Not catastrophically so — enterprise adoption of AI tools is genuinely accelerating — but wide enough to make Carmack’s timeline skepticism financially relevant.

There’s also the definitional problem. What counts as AGI? The term has become so loaded with marketing significance that its technical meaning has blurred. OpenAI’s charter defines it as “highly autonomous systems that outperform humans at most economically valuable work.” By that standard, we’re clearly not close. Current AI systems excel at specific tasks — generating text, writing code, analyzing images — but fail unpredictably at others. They lack persistent memory in any meaningful sense. They can’t reliably plan across long time horizons. They hallucinate facts with confident fluency.

Carmack has been consistent on this point. In previous public statements, he’s emphasized that the path to AGI likely requires fundamental architectural innovations beyond the transformer models that dominate current AI. He’s not dismissing transformers — he’s argued they’re genuinely impressive — but he’s skeptical that scaling them alone will bridge the gap to general intelligence. This puts him at odds with the “scaling is all you need” school of thought that has driven much of the industry’s investment thesis.


Some recent developments lend credibility to his position. Despite enormous increases in training compute, frontier models still struggle with tasks that require genuine causal reasoning, novel problem-solving in unfamiliar domains, or robust common sense. Benchmarks keep getting saturated, but real-world reliability improvements have been more incremental. The AI systems that perform best tend to do so in narrow, well-defined contexts — exactly the pattern you’d expect from sophisticated pattern matching rather than general intelligence.

But the counterargument is real, too. Proponents of aggressive timelines point to the pace of improvement in reasoning benchmarks, the emergence of agentic AI systems that can execute multi-step tasks, and the potential for new training paradigms like reinforcement learning from human feedback and synthetic data generation to extend scaling curves. They argue that dismissing the possibility of rapid progress toward AGI is just as speculative as predicting it.

Carmack would probably agree with that framing to some extent. His post wasn’t a prediction that AGI is impossible or even that it’s decades away. It was a corrective. A recalibration. The AI industry has developed a tendency to conflate rapid progress with imminent arrival at the destination, and Carmack is pointing out that the destination may be much further than the current trajectory suggests.

This matters for more than just investor returns. Policy decisions about AI regulation, workforce planning, education reform, and national security strategy are all being shaped by assumptions about AI timelines. If policymakers believe AGI is five years away, they’ll make very different choices than if they believe it’s twenty or thirty years out. Carmack’s voice adds a credible data point to the more conservative end of that spectrum.

His credibility on technical matters is hard to overstate. Carmack is one of the few living engineers whose contributions have fundamentally shaped multiple technology industries. His work on 3D graphics algorithms in the 1990s enabled the first-person shooter genre and influenced GPU architecture for decades. At Oculus and later Meta, he drove critical advances in virtual reality rendering and mobile optimization. When he speaks about the difficulty of a technical problem, he’s drawing on forty years of solving problems that other people said were impossible.

So when he says AGI isn’t imminent, it’s not defeatism. It’s engineering realism from someone who understands both the power and the limits of software systems at a level few others can claim.

The AI industry would benefit from more of this kind of honesty. The hype cycle around artificial intelligence has reached a pitch where measured assessment gets drowned out by promotional noise. Every new model release is framed as a leap toward superintelligence. Every benchmark improvement is presented as evidence that the singularity is near. Meanwhile, the actual researchers and engineers doing the work — many of whom share Carmack’s skepticism privately — are incentivized to stay quiet or play along.

Carmack doesn’t have those incentives. He’s building a company, yes, but he’s structured it in a way that doesn’t require AGI to arrive on schedule to survive. Keen Technologies is pursuing useful AI capabilities along the way, not betting everything on a single breakthrough moment. That gives him the freedom to say what he actually thinks.

And what he thinks is clear: the hard problems of artificial general intelligence remain unsolved, the current approaches may not be sufficient to solve them, and the industry’s timeline projections are more aspiration than engineering estimate. None of this means the work isn’t worth doing. All of it means the work is harder than the marketing suggests.

For investors, executives, and policymakers trying to make sense of the AI moment, Carmack’s assessment deserves serious consideration. Not because he’s necessarily right about the timeline — nobody knows the timeline — but because his willingness to state an uncomfortable truth is itself a signal. When the smartest engineer in the room says the problem is harder than everyone thinks, the smart move is to plan accordingly.

John Carmack’s Blunt Verdict on AI Progress: ‘We Are Not on the Brink of AGI’ first appeared on Web and IT News.
