Sam Altman admitted he got it wrong. Not about artificial intelligence itself — he still believes that’s the most consequential technology humanity will ever build — but about the people he assumed would misuse it. Speaking at a recent event, the OpenAI CEO said he had “miscalibrated” his distrust, directing too much suspicion toward the U.S. government and military while underestimating threats from foreign adversaries and other actors. It was a striking confession from the man who once positioned OpenAI as the conscience of the AI industry.
The remarks, reported by Business Insider, came as OpenAI deepens its relationship with the Pentagon and the broader national security apparatus — one that would have been unthinkable just two years ago. OpenAI had, until early 2024, explicitly banned military and warfare applications in its usage policies. That prohibition is gone now, replaced by partnerships with defense contractors and direct engagement with the Department of Defense.
This isn’t a subtle repositioning. It’s a full-throated reversal.
Altman’s comments reflect a broader ideological shift inside OpenAI, one that has accelerated as the company has transformed from a nonprofit research lab into one of the most valuable private companies on the planet. The company’s valuation reportedly sits north of $300 billion after its latest funding round. That kind of money changes the calculus. So does the geopolitical competition with China, which has poured enormous state resources into AI development and shown little interest in the kind of safety constraints that once defined OpenAI’s public identity.
According to Business Insider, Altman specifically referenced the Pentagon deal as evidence of his recalibrated worldview. He suggested that the U.S. government and military are not the entities most likely to abuse AI — a position that puts him at odds with many of the researchers and ethicists who helped build OpenAI’s early reputation. Some of those people are no longer at the company. The departures of key safety-focused employees over the past year, including co-founder Ilya Sutskever and superalignment co-lead Jan Leike, have been widely documented.
The timing of Altman’s public remarks is not coincidental. OpenAI has been actively courting government contracts at a moment when the Trump administration has signaled strong interest in accelerating AI adoption across federal agencies, particularly within the defense and intelligence communities. The administration has rolled back Biden-era AI safety executive orders and embraced a more permissive regulatory posture, creating favorable conditions for companies willing to work with the military.
OpenAI’s shift toward defense work began in earnest in January 2024, when the company quietly updated its usage policy to remove the blanket prohibition on military applications. At the time, a spokesperson said the change was meant to allow for certain national security use cases that aligned with OpenAI’s mission, such as cybersecurity and veteran support. But the door, once opened, swung wide. Within months, OpenAI had announced partnerships with defense technology firms and begun discussions with the Pentagon about integrating its models into military workflows.
The company isn’t alone in this pivot. Anthropic, Google DeepMind, and other major AI labs have all engaged with defense and intelligence agencies to varying degrees. But OpenAI’s trajectory is the most dramatic because its starting point was the most idealistic. Founded in 2015 with a charter emphasizing that artificial general intelligence should “benefit all of humanity,” OpenAI spent years cultivating an image as the responsible steward of a dangerous technology. That image is harder to maintain when your models are being evaluated for military applications.
Altman’s framing — that he was too suspicious of the wrong parties — is a clever rhetorical move. It reframes the policy reversal not as a capitulation to commercial pressure but as a correction of a moral error. He wasn’t naive about AI risks, the argument goes. He was just pointing his concern in the wrong direction. The real threats, in this telling, come from China, from Russia, from non-state actors who won’t observe any safety norms at all. Better to have the Pentagon using OpenAI’s models, with whatever guardrails the company can negotiate, than to cede that ground to adversaries.
There’s a logic to this position that many in Washington find compelling. And Altman has become an increasingly effective operator in the capital, meeting regularly with lawmakers and administration officials, testifying before Congress, and positioning himself as a partner rather than a disruptor. His political instincts have sharpened considerably since OpenAI’s early days.
But the argument has holes.
Critics point out that the history of military technology adoption is littered with examples of tools developed for defensive purposes being repurposed for offensive ones. The same AI model that analyzes satellite imagery for threat detection can be used to identify targets for drone strikes. The same natural language processing system that helps analysts summarize intelligence reports can be used to generate propaganda. Altman’s assurance that OpenAI will maintain ethical boundaries rings hollow to those who watched the company abandon its nonprofit structure, purge its safety team, and rewrite its usage policies in the span of eighteen months.
The financial incentives are enormous. Government contracts, particularly in defense, represent a massive and reliable revenue stream — exactly what a company burning through billions in compute costs desperately needs. OpenAI reportedly spends more than $7 billion annually on the computing infrastructure required to train and run its models. Its consumer products, while popular, haven’t yet generated the kind of revenue needed to justify its valuation. Enterprise contracts help. Government contracts help more.
There’s also the competitive angle. If OpenAI doesn’t take Pentagon contracts, someone else will. Palantir, Anduril, and a growing constellation of defense-focused AI startups are already deeply embedded in military procurement. Scale AI has built a significant business providing data labeling and AI infrastructure to the Department of Defense. For OpenAI to remain on the sidelines would mean watching competitors gain access to government data, government funding, and government relationships that could shape the future of the industry.
Altman seems to have concluded that the risks of abstaining outweigh the risks of participation. That’s a defensible position in a world where AI capabilities are advancing rapidly and geopolitical competition is intensifying. But it’s a far cry from the company’s founding ethos, and the speed of the transformation has left even sympathetic observers uneasy.
Inside OpenAI, the shift has produced real tension. Several current and former employees have spoken, mostly anonymously, about their discomfort with the company’s direction. The safety team, once a central pillar of OpenAI’s organizational identity, has been restructured and, by some accounts, marginalized. The board, reconstituted after the dramatic ouster and reinstatement of Altman in November 2023, is now stacked with members more sympathetic to commercial expansion and less inclined to pump the brakes on growth.
The nonprofit entity that originally governed OpenAI is being wound down as part of a conversion to a for-profit structure — a process that has drawn scrutiny from state attorneys general and generated lawsuits from Elon Musk, a co-founder who has accused Altman of betraying the organization’s original mission. Musk’s critiques are complicated by his own interests — he runs xAI, a direct competitor — but the underlying complaint resonates with many who were drawn to OpenAI precisely because it promised to be different from a conventional tech company.
Altman’s “miscalibration” framing also raises a question he may not want to answer: If he was wrong about where to direct his distrust before, how confident should anyone be that he’s calibrated correctly now? The same intellectual humility that makes the admission appealing also undermines the certainty with which he’s embracing the new direction. If the lesson is that smart people can be wrong about who deserves trust, that lesson applies to the current set of partnerships too.
The broader AI industry is watching this closely. OpenAI’s willingness to work with the Pentagon gives political cover to other companies considering similar moves. It normalizes defense work in a sector that has historically been squeamish about it — a legacy of the Google employee protests in 2018 that pushed the company out of Project Maven, the Pentagon’s flagship AI imagery program, which continued with other contractors. The cultural norms of Silicon Valley have shifted dramatically since then, driven by a combination of geopolitical anxiety, commercial opportunity, and the waning influence of the tech workforce’s more progressive elements.
Washington, for its part, has been eager to reciprocate. The Defense Department has accelerated its AI adoption efforts through initiatives like the Chief Digital and Artificial Intelligence Office, and Congress has shown bipartisan interest in ensuring American companies maintain their lead over Chinese competitors. The political environment for AI-defense partnerships hasn’t been this favorable in years, possibly ever.
So Altman is pushing on an open door. The question is what comes through it.
OpenAI’s models are extraordinarily capable — and becoming more so with each generation. GPT-4 and its successors can process and generate text, images, code, and audio with a fluency that would have seemed like science fiction five years ago. Applied to military contexts, these capabilities could transform intelligence analysis, logistics, communications, and decision-making at every level of command. They could also be used in ways that raise profound ethical and legal questions about autonomous weapons, surveillance, and the role of human judgment in lethal decisions.
Altman has said repeatedly that OpenAI will not build autonomous weapons. But “autonomous weapons” is a term with fuzzy boundaries, and the line between a decision-support tool and an autonomous system gets blurrier as AI capabilities improve. A model that recommends a course of action with 99% accuracy and a two-second response time creates enormous pressure on human operators to simply approve the recommendation. The human is technically in the loop. In practice, the machine is making the call.
These are not hypothetical concerns. They are active debates within the Pentagon, at NATO, and at the United Nations. And they’re debates that OpenAI, by entering the defense space, will now have a direct stake in shaping. That’s either reassuring or alarming, depending on how much faith you place in a company that has rewritten its own rules with remarkable speed.
Altman’s candor about his earlier miscalibration is, in one sense, refreshing. Tech executives rarely admit to being wrong about anything. But candor about past errors doesn’t guarantee wisdom about present choices. And the stakes of these particular choices — involving the world’s most powerful military and the world’s most capable AI systems — are as high as they get.
What’s clear is that the OpenAI of 2026 bears little resemblance to the OpenAI of 2020, or even 2023. The charter still exists on the company’s website. The words “benefit all of humanity” are still there. But the meaning of those words has been stretched and reinterpreted to accommodate a business model, a geopolitical posture, and a set of partnerships that the founders almost certainly did not envision. Whether that evolution represents pragmatic maturity or mission drift depends entirely on where you stand — and how much you trust the people making the decisions.
Sam Altman is asking the world to trust him. Again. He’s asking it to believe that this time, his calibration is right. That the Pentagon is a worthy partner. That the safeguards will hold. That the technology will be used wisely. It’s a big ask. And the answer will unfold not in press conferences or policy documents, but in the classified programs and operational deployments that the public may never fully see.