
Sam Altman May Control Our Future—Can He Be Trusted?


By Andrew Marantz and Ronan Farrow

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them.


“He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols.

One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.” Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue.

But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival.

The C.E.O. had to be a person of uncommon integrity.

According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.”

If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted. Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI. The board, following legal advice, released a public message saying only that Altman had been removed because he “was not consistently candid in his communications.”

Many of OpenAI’s investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened.

“I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said.

“I couldn’t get anything out of anybody.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense.

“I didn’t know what the fuck was going on,” Hoffman told us.

“We were looking for embezzlement, or sexual harassment, and I just found nothing.” Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch.

“You better get out of here really quick,” she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity.

Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman.

“We just immediately went to war,” Kushner later said. The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone.

Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas.

When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.” With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Altman interrupted his “war room” at six o’clock each evening with a round of Negronis.

“You need to chill,” he recalls saying.

“Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.) Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella.

(“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”) Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization.

Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman. The board was backed into a corner.

“Control Z, that’s one option,” Toner said—undo the firing.

“Or the other option is the company falls apart.” Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever.

Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider.

“You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.”
