The New Yorker's In-Depth Investigation: Why Do OpenAI Insiders Consider Altman Untrustworthy?
On a nonprofit corpse, a money tree grows.
Byline: Xiao Bing, Deep Tide TechFlow
In the fall of 2023, OpenAI's chief scientist, Ilya Sutskever, sat at his computer and finished a 70-page document.
The document was compiled from Slack message logs, HR correspondence, and internal meeting minutes, all to answer a single question: can Sam Altman, the man in control of what may be the most dangerous technology in human history, actually be trusted?
Sutskever's answer began on the document's very first page, first line, with a list headed: "Sam demonstrates a consistent pattern of behavior…"
First: Lying.
Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published a sweeping long-form investigation in The New Yorker. After interviewing more than 100 people involved, they obtained internal memos that had never been made public, as well as more than 200 pages of private notes that Anthropic founder Dario Amodei left behind from his time at OpenAI. The story these documents piece together is far uglier than the 2023 "palace intrigue": how OpenAI turned, step by step, from a nonprofit created for the safety of humanity into a commercial machine, with nearly every safety safeguard dismantled by the same man's own hands.
The conclusion in Amodei's notes is even more blunt: "OpenAI's problem is Sam himself."
OpenAI's "original sin" structure
To grasp the weight of this report, you first need to understand just how unusual OpenAI is.
In 2015, Altman and a group of Silicon Valley elites did something with almost no precedent in business history: they used a nonprofit organization to develop what could be the most powerful technology humanity has ever built. The board's responsibilities were spelled out explicitly: safety comes before the company's success, even before the company's survival. In plain terms, if OpenAI's AI ever became dangerous, the board was obligated to shut the company down with its own hands.
The entire structure rested on a single assumption: whoever controls AGI must be an extraordinarily honest person.
What if they bet wrong?
The bombshell at the core of the report is that 70-page document. Sutskever is no office politician; he is one of the world's top AI scientists. Yet by 2023 he had become increasingly certain of one thing: Altman was continually lying to executives and the board.
A concrete example: in December 2022, Altman assured the board at a board meeting that multiple features of the upcoming GPT-4 had already passed safety review. When board member Helen Toner asked to see the approval documents, she found that the two most controversial features (user-customized fine-tuning and personal-assistant deployments) had never been approved by any safety panel at all.
Something even stranger happened in India. An employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India ahead of schedule, without completing the required safety review.
Sutskever recorded yet another incident in the memo: Altman told then-CTO Mira Murati that the safety approval process didn't matter much, since the company's general counsel had already signed off. When Murati went to the general counsel to confirm, the reply was, "I don't know where Sam got that impression."
Amodei's 200-plus pages of private notes
If Sutskever's document reads like a prosecutor's indictment, the 200-plus pages of notes Amodei left behind read more like the diary of a witness at the crime scene.
During his years as head of safety at OpenAI, Amodei watched the company retreat, one step at a time, under commercial pressure. In his notes he recorded a key detail from the 2019 Microsoft investment deal: he had pushed a "merge and assist" clause into OpenAI's charter, stating roughly that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. This was the safety guarantee he valued most in the entire transaction.
Shortly before the deal was signed, Amodei discovered something: Microsoft had obtained veto power over this clause. What did that mean? Even if a competitor one day genuinely found a better route, Microsoft could block OpenAI's obligation to assist with a single word. The clause still existed on paper, but from the day the deal was signed it was a dead letter.
Amodei later left OpenAI and founded Anthropic. The competition between the two companies comes down, at its core, to a fundamental disagreement over how AI should be developed.
The missing 20% compute pledge
The report contains a detail that sends a chill down your spine, concerning OpenAI's "superalignment" team.
In mid-2023, Altman emailed a Berkeley PhD student who researched "deceptive alignment" (where an AI acts obedient during testing but does as it pleases once deployed). Altman said he was deeply concerned about the problem and was considering establishing a $1 billion global research prize. Greatly encouraged, the student took a leave of absence and joined OpenAI.
Then Altman changed his mind: there would be no external prize; instead the company would build a "superalignment" team internally. OpenAI loudly announced that it would dedicate "20% of existing compute" to the team, a commitment potentially worth more than $1 billion. The announcement's wording was deadly serious: if the alignment problem could not be solved, AGI might lead to "the disempowerment of humanity, even human extinction."
Jan Leike, who was later appointed to lead the team, told the reporters that the pledge itself was a very effective "talent retention tool."
And the reality? Four people who worked on the team or were close to it said the compute actually allocated was only 1% to 2% of the company's total, and on the oldest, most outdated hardware at that. The team was eventually disbanded, its mission unfinished.
When the reporters asked to interview the people at OpenAI responsible for "existential safety" research, the company's PR response was as laughable as it was heartbreaking: "That's not a… thing that actually exists."
Altman himself seemed unbothered. He told reporters that his “intuition doesn’t quite match many of the things in traditional AI safety,” and that OpenAI would still do “safety projects, or at least projects that have something to do with safety.”
A sidelined CFO and a looming IPO
The New Yorker report was only half of that day's bad news. The same day, The Information broke another major story: OpenAI CFO Sarah Friar had split sharply with Altman.
Friar privately told colleagues she believed OpenAI was not ready to go public this year, for two reasons: too much procedural and organizational work remained to be done, and the financial risk from the $600 billion in compute spending Altman had committed to over five years was too high. She wasn't even sure OpenAI's revenue growth could support those commitments.
But Altman wanted to push for an IPO in the fourth quarter of this year.
More bizarre still, Friar no longer reports directly to Altman. Since August 2025 she has reported to Fidji Simo, the CEO of OpenAI's applications business, and Simo had just gone on sick leave the week before. Make of this what you will: a company sprinting toward an IPO, a CEO and CFO in fundamental disagreement, a CFO who doesn't report to the CEO, and the CFO's boss on leave.
Even senior executives inside Microsoft have lost patience, saying Altman "distorts facts, reneges, and repeatedly overturns agreements that have already been reached." One Microsoft executive went so far as to say: "I think there's some probability he will ultimately be remembered as a fraudster on the level of Bernie Madoff or SBF."
Altman's two faces
A former OpenAI board member described two of Altman's traits to the reporters, in what may be the harshest character sketch in the entire report.
Altman, the board member said, has an extremely rare combination of traits: in every face-to-face exchange, he desperately wants to please the other person and be liked; at the same time, he shows an almost sociopathic indifference to the consequences of deceiving them.
Finding both traits in the same person is extremely rare. For a salesman, it is the perfect talent.
The report offers an apt comparison: Steve Jobs was famous for his "reality distortion field," the ability to make the whole world believe in his vision. But even Jobs never told customers, "If you don't buy my MP3 player, the people you love will die."
Altman has said something similar about AI.
Why a CEO's character problem becomes everyone's risk
If Altman were merely the CEO of an ordinary tech company, these accusations would amount, at most, to entertaining business gossip. But OpenAI is not ordinary.
By its own account, it is developing what could be the most powerful technology in human history: technology that could reshape the global economy and labor market (OpenAI just released a policy white paper on AI-driven unemployment) and that could also be used to create large-scale biological weapons or launch cyberattacks.
The safety guardrails have been reduced to formalities. The founding nonprofit mission has given way to an IPO sprint. Both the former chief scientist and the former head of safety have concluded that the CEO is "not trustworthy," and partners compare him to SBF. On what basis, then, does this one CEO get to decide unilaterally when to release an AI model that could change the fate of humanity?
Gary Marcus, a New York University professor and long-time advocate for AI safety, summed it up in one sentence after reading the report: if a future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, would you really be comfortable letting Altman alone decide whether to release it?
OpenAI's response to The New Yorker was terse: "Most of this article rehashes previously reported incidents through anonymous accounts and selective anecdotes; the sources clearly have personal motives."
It is a very Altman-style response: it addresses none of the specific accusations, does not deny the authenticity of the memos, and questions only the motives.
On a nonprofit corpse, a money tree grows
OpenAI’s decade, written as a story outline, goes like this:
A group of idealists worried about AI risk create a mission-driven nonprofit. The organization makes extraordinary technological breakthroughs. The breakthroughs attract enormous capital. Capital demands returns. The mission starts to give way. The safety team is disbanded. The people who raise doubts are purged. The nonprofit structure is converted into a for-profit entity. A board that once had the power to shut the company down is now filled with the CEO's allies. And the company that once promised to dedicate 20% of its compute to protecting humanity now has a PR team saying, "That's not a thing that actually exists."
As for the story's protagonist: more than 100 firsthand witnesses gave him the same label, "not bound by the truth."
He is preparing to take this company public at a valuation exceeding $850 billion.
This article is compiled from public reporting by The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other outlets.