
A new investigation from The New Yorker lands at an uncomfortable moment for the AI industry.
At a moment when companies like OpenAI are consolidating unprecedented power, the question is no longer just what AI will become but who gets to decide.
And in OpenAI’s case, that increasingly points to one person: Sam Altman.
For years, Altman has positioned himself as a careful steward of one of the most consequential technologies ever created. Under his leadership, OpenAI grew from a relatively obscure research lab into the company at the centre of the global AI boom, powering everything from chatbots to enterprise systems and, potentially, future artificial general intelligence.
But the New Yorker’s reporting complicates that narrative.
Based on interviews, internal documents, and accounts from former colleagues, the piece surfaces a pattern of concerns that have followed Altman for years: questions about transparency, governance, and whether the same person pushing AI forward can also be trusted to slow it down when necessary.
At one point, OpenAI’s own leadership appeared to reach a breaking point. In 2023, the company’s board removed Altman, citing a lack of confidence in his candour, a move that shocked the tech world before being reversed just days later under pressure from employees and investors.
That episode now reads less like a one-off crisis and more like a preview.
What makes this moment different is scale.
OpenAI is no longer just another startup. It’s a company that could define how intelligence itself is built, distributed, and controlled. Its models are already embedded across industries, and its long-term ambition of building systems that can reason, act, and potentially surpass human capabilities places it in a category few companies have ever occupied.
That level of influence changes the stakes.
According to the reporting, some insiders worried that Altman downplayed safety concerns and sought to move faster than internal guardrails would allow, even as the company publicly emphasized its commitment to responsible AI.
Others described a leadership style that prioritized growth and positioning—particularly as OpenAI transitioned from a non-profit mission to a hybrid, profit-driven structure capable of attracting massive investment.
In isolation, those tensions might not be unusual for a fast-growing tech company.
In the context of AI, they feel different.
Because AI isn’t just another product cycle.
It’s a technology that could reshape labour markets, national security, and the global balance of power. Even Altman himself has acknowledged that advanced AI could introduce risks ranging from mass job displacement to large-scale misuse if not properly managed.
That creates a paradox at the centre of the industry.
The same companies building AI are also being asked to regulate it. The same leaders racing to deploy more powerful systems are expected to act as gatekeepers for their safe use.
And there is no clear external authority with the technical understanding—or the speed—to fully oversee them.
This is where the question of trust becomes unavoidable.
The New Yorker’s framing is blunt: if a handful of individuals may end up controlling systems that shape the future of humanity, then their judgment, incentives, and behaviour matter as much as the technology itself.
And those are not easily audited.
The reporting points to internal disagreements, strained relationships with former collaborators, and broader scepticism from within the AI community about how decisions are being made and who ultimately benefits.
At the same time, Altman continues to position himself as part of the solution, proposing new economic frameworks, regulatory ideas, and global cooperation models to manage the impact of AI.
That dual role, builder and regulator at once, may be the most important tension of all.
None of this suggests a simple conclusion.
Altman remains one of the most influential figures in technology, and OpenAI continues to drive much of the industry’s progress. The company’s tools are widely used, its research is shaping the field, and its ambitions are aligned with what many believe is the next major phase of computing.
But the deeper question raised by this reporting isn’t about one person.
It’s about structure.
What happens when transformative technology is controlled by private companies? What happens when those companies move faster than governance frameworks can keep up? And what happens when trust becomes a prerequisite for safety but trust itself is contested?
For now, there are no clear answers.
Only a growing realization that the future of AI may depend less on what these systems can do and more on who is allowed to decide what they should do.