Jensen Huang thinks artificial general intelligence is closer than most people realize. Not in some distant, hazy future. Soon.
In a wide-ranging conversation on the Lex Fridman Podcast, the Nvidia CEO offered his most detailed public assessment yet of where AI is headed, how fast it’s accelerating, and what his company intends to do about it. The interview, which ran over two and a half hours, covered everything from chip architecture and energy consumption to the philosophical implications of machines that can reason. But the headline was this: Huang believes that by some reasonable definitions, AGI — the point at which AI matches or exceeds human cognitive ability across a broad range of tasks — could arrive within the next few years.
That’s a striking claim from the man running the company that supplies most of the world’s AI training hardware. It’s not the kind of thing you say casually when your market capitalization hovers near $3 trillion.
Huang was careful to qualify his prediction, as Mashable reported. The timeline depends entirely on how you define AGI, he said. If the benchmark is passing human professional exams — bar exams, medical licensing tests, logic assessments — then AI systems are already knocking on the door. Some are already through it. Large language models from OpenAI, Google DeepMind, and Anthropic have demonstrated the ability to score at or above human expert levels on a growing number of standardized tests. If you define AGI as something more expansive, as a system that can learn any intellectual task a human can with minimal instruction, the timeline stretches. But not by much, according to Huang. He suggested that even under more demanding definitions, the industry could see AGI-class systems emerge within five to ten years.
The interview landed at a moment when the AI industry is grappling with an identity crisis of sorts. Massive capital is flowing into foundation model companies and inference infrastructure, but questions about the actual return on that investment are growing louder. Wall Street analysts have started asking when the billions being spent on Nvidia GPUs will translate into proportional revenue for the companies buying them. Huang’s answer, implicitly, is that the payoff is enormous — but it requires patience and continued investment in compute.
He made the case on the podcast that computing demand will continue to grow exponentially. Not just because models are getting bigger, but because the nature of AI workloads is shifting. Training was the first wave. Inference — the process of running trained models to generate outputs in real time — is the second, and it’s far larger. Every chatbot query, every AI-generated image, every autonomous vehicle decision requires inference compute. And as AI agents become more capable and more embedded in enterprise workflows, the demand curve steepens dramatically.
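The scale of that inference demand is easy to underestimate. A rough back-of-envelope sketch makes the point — note that the 2N-FLOPs-per-token rule of thumb and all of the numbers below are illustrative assumptions, not figures from the interview:

```python
# Back-of-envelope: why inference compute compounds with usage.
# Rule of thumb (assumption): one forward pass of a dense N-parameter
# model costs roughly 2 * N FLOPs per generated token.

def inference_flops_per_day(params: float, tokens_per_query: float,
                            queries_per_day: float) -> float:
    """Estimate total daily inference FLOPs for a deployed model."""
    flops_per_token = 2 * params  # dense forward pass, per token
    return flops_per_token * tokens_per_query * queries_per_day

# Hypothetical deployment: a 70B-parameter model, 500 tokens per
# response, 100 million queries per day.
daily = inference_flops_per_day(70e9, 500, 100e6)
print(f"{daily:.2e} FLOPs/day")  # → 7.00e+21 FLOPs/day
```

Training is a one-time (if enormous) cost; the figure above recurs every day and scales linearly with users, which is why inference is the larger wave.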
This is where Nvidia’s business strategy locks in. The company isn’t just selling GPUs anymore. It’s selling an entire computing stack — chips, networking, software frameworks, and the CUDA programming model that ties it all together — designed to make AI development faster and cheaper at every layer. Huang has described this as “accelerated computing,” a term he’s been pushing for years but which has only recently gained traction as AI spending has exploded.
On the podcast, Huang also addressed the competitive threat from custom AI chips being developed by Nvidia’s biggest customers. Amazon, Google, Microsoft, and Meta have all invested heavily in designing their own silicon for AI workloads. Google’s TPUs have been in production for years. Amazon’s Trainium chips are gaining traction within AWS. Huang acknowledged these efforts but argued that Nvidia’s advantage lies in generality and software maturity. Custom chips, he said, tend to be optimized for narrow workloads. Nvidia’s GPUs are designed to handle a wide range of AI tasks efficiently, and the CUDA software layer — with millions of developers already trained on it — creates a switching cost that’s difficult to overcome.
He’s not wrong about the switching costs. But the competitive pressure is real and intensifying.
Recent reporting from Reuters has highlighted how hyperscalers are increasingly looking to diversify their chip supply chains, partly to reduce dependence on Nvidia and partly to optimize costs for specific inference workloads. The tension between Nvidia and its largest customers — who are simultaneously its biggest buyers and its most capable potential competitors — is one of the defining dynamics in the semiconductor industry right now.
Huang seemed unfazed. He told Fridman that competition validates the market. The more companies invest in AI silicon, the larger the total addressable market for accelerated computing becomes. And Nvidia, he argued, will continue to lead because of its pace of innovation. The company has committed to a one-year cadence for new GPU architectures, a punishing schedule that Huang described as necessary to stay ahead of exponentially growing demand.
The conversation turned philosophical when Fridman pressed Huang on what AGI would actually mean for society. Huang’s response was measured but optimistic. He described a future in which AI systems serve as intellectual collaborators — amplifying human capability rather than replacing it. He drew an analogy to the industrial revolution, noting that mechanization didn’t eliminate work but transformed it, creating entirely new categories of employment that hadn’t previously existed. AI, he suggested, would follow a similar pattern, though the transition would be faster and more disorienting.
Not everyone shares that optimism. Prominent AI researchers, including Geoffrey Hinton, have warned that the risks of AGI-class systems are substantial and potentially existential. Hinton, who left Google in 2023 partly to speak more freely about AI dangers, has argued that superintelligent systems could pursue goals misaligned with human values, with catastrophic consequences. Huang didn’t dismiss these concerns outright, but he framed them as engineering problems rather than insurmountable obstacles. Safety, he said, is a matter of building the right guardrails — and Nvidia is investing in tools to help developers do exactly that.
That framing — safety as an engineering challenge, not a fundamental barrier — is characteristic of Huang’s worldview. He is, at his core, a builder. And builders tend to believe that problems are solvable with enough ingenuity and compute.
The podcast also touched on Nvidia’s expanding role in robotics and physical AI. Huang has been talking up the concept of “embodied intelligence” — AI systems that don’t just process language and images but interact with the physical world through robotic systems. Nvidia’s Omniverse platform, a simulation environment for training robots and autonomous systems, is central to this vision. Huang described a future in which factories, warehouses, and even surgical suites are populated by AI-driven robots that learned their skills in simulated environments before ever touching a physical object.
This is not science fiction. Companies like Amazon and Tesla are already deploying robots trained using simulation-to-reality transfer techniques. But scaling this approach requires enormous compute resources — which, conveniently, is exactly what Nvidia sells.
The timing of Huang’s public commentary is worth examining. Nvidia recently reported quarterly earnings that once again exceeded Wall Street expectations, with data center revenue surging on the back of AI demand. But the stock has been volatile, reflecting investor uncertainty about the sustainability of the current spending boom. By articulating a clear, aggressive vision for AGI and the compute infrastructure it requires, Huang is making a case directly to investors: this isn’t a bubble. The demand is structural, and it’s just getting started.
Whether he’s right will depend on variables that even Huang can’t fully control. Regulatory environments are shifting. The European Union’s AI Act is already imposing new compliance requirements on foundation model developers. The United States is debating its own framework. China is pursuing AI development with massive state backing but under different rules. Geopolitical tensions over chip exports — particularly the U.S. restrictions on selling advanced Nvidia GPUs to China — add another layer of complexity.
Huang addressed the China situation briefly on the podcast, acknowledging that export controls have created challenges but insisting that Nvidia will comply with all regulations while continuing to serve global markets. It was a diplomatic answer. The reality is messier. Nvidia has had to design stripped-down chip variants specifically for the Chinese market, and there are ongoing concerns that restricted chips are reaching Chinese entities through intermediaries in other countries.
So where does this leave the industry? Huang’s vision is coherent and compelling: AGI is approaching, it will require staggering amounts of compute, and Nvidia is positioned to supply that compute better than anyone else. The logic is self-reinforcing in a way that benefits Nvidia enormously. And to his credit, the company has executed brilliantly over the past several years, anticipating the AI boom before most of its competitors and building the products to capture it.
But coherent visions don’t always survive contact with reality. The history of technology is littered with confident predictions about timelines that proved wildly optimistic. AGI predictions, in particular, have a long track record of being premature. Researchers have been forecasting human-level AI “within 20 years” for the past 60 years.
What’s different now — and this is the part that makes Huang’s argument harder to dismiss — is the empirical evidence. AI systems are demonstrably more capable than they were even 18 months ago. The rate of improvement is not linear. It’s compounding. And Nvidia’s GPUs are the substrate on which nearly all of that improvement is being built.
Jensen Huang isn’t just predicting the future. He’s selling it. And right now, the market is buying.
Jensen Huang Says AGI Is Coming Fast — And Nvidia Is Building the Road It Travels On first appeared on Web and IT News.