A landmark investigation by Ronan Farrow and Andrew Marantz into Sam Altman and OpenAI, published in The New Yorker, raises questions that go far beyond Silicon Valley boardroom drama. Here is what the rest of us should be paying attention to.
There is a particular kind of power that operates best in the gap between what people believe and what is actually true. Ronan Farrow and Andrew Marantz’s sweeping investigation into Sam Altman, published this month in The New Yorker, is not simply a profile of a tech executive. It is a portrait of how the most consequential technology in human history is being built, funded, and deployed — and of the profound absence of accountability surrounding that process.
For readers outside Silicon Valley, the details may feel remote. Boardroom coups, venture capital disputes, disappearing messages between scientists. But strip away the insider texture and what remains is something that should concern every person who uses the internet, pays taxes, or lives in a democracy: a small group of individuals, accountable to almost no one, are making decisions that will reshape every aspect of human life, and the mechanisms we might expect to keep them in check are not working.
The Safety Myth
Perhaps the most important thread running through the investigation is the systematic dismantling of OpenAI’s founding safety commitments. The company was established on an extraordinary premise: that it was building something so potentially dangerous that profit could never be the primary motive, and that the wellbeing of humanity had to take legal precedence over the company’s own survival.
That premise has been quietly abandoned. Safety teams have been dissolved. Researchers who raised alarms have been sidelined or have left. The language of existential risk, once used to recruit idealistic scientists and secure philanthropic funding, appears to have been repurposed as a marketing tool — deployed when useful, set aside when inconvenient.
This matters to ordinary people because the reassurances we have been given about AI — that the people building it take the risks seriously, that guardrails exist, that someone responsible is in charge — appear to rest on increasingly shaky foundations. With AI systems now being integrated into military operations, immigration enforcement, domestic surveillance, and autonomous weaponry, the question of who is accountable for their behaviour is not abstract. It is urgent.
The Regulation Trap
One of the more striking revelations in the piece concerns the gap between Altman’s public posture on regulation and his private conduct. He became something of a darling among lawmakers precisely because he appeared to welcome oversight — a refreshing contrast to the defensive crouch of social media executives before him. Senators were charmed. Editorialists were reassured.
Behind the scenes, however, the picture looks rather different. Altman reportedly lobbied in private against legislation that would have mandated safety testing, even as he supported it in public. Legal tools were deployed against individuals who had helped draft safety bills. Financial relationships appear to have been used to create dependencies that made independent criticism difficult.
This is the regulatory trap that citizens and policymakers need to understand: the performance of accountability is not the same as accountability. An executive who testifies eloquently before a Senate committee while simultaneously working to neutralise the legislation being proposed in that very committee is not a good-faith participant in democratic oversight. They are managing it.
The Money Trail
The investigation raises serious questions about where the capital fuelling the AI boom is coming from and what conditions are attached to it. Gulf state sovereign wealth funds, with their vast resources and their proximity to authoritarian governments, have become central to the financial architecture of American AI development.
This is not a matter of abstract geopolitics. Data centres, once built, become infrastructure. Infrastructure, once established, creates dependencies. And dependencies, once entrenched, are extraordinarily difficult to dislodge. When national security officials express concern about concentrating advanced AI capacity in regions with a history of technology transfer to adversaries, those concerns deserve serious public debate — not quiet management by a single executive pursuing funding at any cost.
The competition between AI companies for capital has created a race to the bottom that mirrors, in some ways, the early years of social media. Then, the pressure to grow at all costs led platforms to optimise for engagement over truth, with consequences we are still living with. Now, the pressure to secure the computational resources necessary to train ever-larger models is creating financial entanglements that may compromise both safety and national security.
What Loyalty Costs
A recurring theme in the investigation is the way financial relationships create personal dependencies. Investors, partners, and associates who might otherwise provide independent judgment find themselves entangled in ways that make criticism costly. This pattern, which the investigation documents in considerable detail, is not unique to AI — it is a feature of concentrated power in any industry.
But it has particular implications in a sector where independent expert voices are urgently needed. If the researchers who understand these systems best are financially connected to the people running them, the independence of that expertise is compromised. If the investors who might otherwise demand accountability are co-invested alongside the executives they are supposed to oversee, the checks that markets are supposed to provide simply do not function.
For those of us who rely on journalists, academics, and public interest advocates to help us understand what is happening in AI, this should prompt some careful thinking about funding and independence. Who is paying for the analysis you are reading? What relationships exist between the institutions producing it and the companies being analysed?
The Democratic Stakes
Ultimately, what the Farrow-Marantz investigation illuminates is a governance crisis. The technology being developed by a handful of companies in San Francisco will determine the future of work, of warfare, of the information environment, and of the relationship between citizens and states. And the decisions being made about how that technology is built, deployed, and controlled are being made almost entirely outside democratic structures.
This is not a partisan point. It is a structural one. What values should be encoded in AI systems, whose interests they should serve, what constraints they should operate under, how failures should be remedied: these are fundamentally political questions. They require democratic deliberation, public accountability, and genuine institutional oversight.
What we have instead, at least as depicted in this investigation, is a system in which the people building the technology set the rules, police themselves, and — when the rules become inconvenient — rewrite them.
What to Watch For
For readers trying to navigate this landscape, a few things are worth tracking closely.
Watch for the gap between public commitments and private conduct. When AI companies make announcements about safety, look for independent verification rather than accepting the announcement at face value. Ask who is checking the claim and what incentives they have to check it honestly.
Watch the money. The funding relationships in AI are complex, but they are increasingly visible. When a company announces a major investment, ask where the capital is coming from, what conditions are attached, and what the investor expects in return.
Watch what happens to people who raise concerns internally. The pattern documented in this investigation — in which employees who flagged safety issues were marginalised, and the teams dedicated to those issues were dissolved — is a warning sign worth heeding across the industry.
Watch the regulatory environment. The shift in Washington toward dismissing safety concerns as hand-wringing that threatens competitiveness represents a significant change in the political landscape. Pay attention to which politicians are receiving funding from AI-aligned donors, and what positions they then take on oversight legislation.
And watch, above all, for the deployment of AI in contexts where accountability is hardest to establish: military operations, law enforcement, border control, and critical infrastructure. These are precisely the areas where the consequences of failure are most severe and where the public’s ability to scrutinise what is happening is most limited.
The story Farrow and Marantz have told is not a simple one of villains and heroes. It is a story about what happens when the pressure of competition, the scale of capital, and the absence of effective oversight converge around a technology that its own creators have described as potentially the most powerful — and dangerous — in human history.
Whether that story ends well depends largely on whether the public, and the institutions that represent them, decide to pay attention while there is still time for that attention to matter.
This piece is original commentary and analysis inspired by reporting in The New Yorker, April 13, 2026. It does not reproduce the original article.
