Why do so many people in the United States dislike Sam Altman?

A jury was seated yesterday in Courtroom 9 of the federal courthouse in Oakland, California: nine people took their seats as “advisory jurors” for a trial expected to last four weeks, at the end of which they will deliver a recommendation to Judge Rogers. Today, Tuesday, opening statements begin.

On the very day jury selection took place, OpenAI announced a newly revised agreement with Microsoft. The agreement eliminated one thing: Microsoft’s exclusive license to OpenAI’s intellectual property. That license was the last of the locks OpenAI placed on itself when it transitioned to a “limited-profit” structure in 2019.

What exactly is Musk suing about?

Two weeks before the trial began, Reuters reports and CNBC’s trial diaries had already compiled a checklist of the case. When Musk first filed suit in early 2024, he brought 26 claims, ranging from securities fraud and racketeering (RICO) to antitrust. Now, as the case goes to trial, only two remain: unjust enrichment and breach of a charitable trust.

The other 24 claims were either dismissed by the judge at the motion stage or withdrawn by Musk himself. A few days before trial, he voluntarily dropped part of the “fraud” allegations, narrowing the case to its most central—and simplest—claim, a single sentence: “Back then, OpenAI promised me it would always be non-profit,” and now it isn’t.

For that one sentence, Musk’s damages claim runs as high as $134 billion. According to his complaint, any compensation would be returned to OpenAI’s non-profit arm; what he actually seeks is to remove Altman and Brockman and to unwind the entire for-profit restructuring. This is the true core of the lawsuit. The target isn’t stock distribution. It’s the shell of OpenAI itself—who the company really belongs to.

Judge Gonzalez Rogers has split the trial into two phases. First comes the liability phase, to be completed by mid-May. Only if liability is established will damages be tried. The jury participates only in the first phase, and only in an advisory capacity; the power to issue the final ruling remains with the judge. For Musk, this means winning the “narrative battle” matters more than winning on “damages.” Convince the nine jurors that “the company made a promise to its donors, then systematically dismantled it.” If those nine people nod, the judge will put the remaining pieces together.

OpenAI’s strategy is almost a mirror image. They want the jury to believe that the real motive behind Musk’s lawsuit is competitive jealousy, with nothing to do with breach of trust. On the day of jury selection, OpenAI’s official account fired first: “We can’t wait to show our evidence in court—the truth and the law are on our side. This lawsuit has been nothing but a baseless, jealous attempt to suppress competition… We finally have the chance to have Musk testify in front of a California jury.”

Pay attention to the phrase “have Musk testify.” This is strategy. What OpenAI really wants is to cast Musk, in the court of public opinion on X, as “the founder of xAI who lost to OpenAI”; persuading the judge comes second. That way, the ordinary Californians on the jury walk into the courtroom already wearing that filter.

How was OpenAI’s “lock” dismantled?

To understand why Musk is so furious, you first need to understand the three locks OpenAI set for itself in 2019—each one designed with a clear intent.

You’ll notice one thing. In 2019, OpenAI was demonstrating to donors that “even if we want to make money, it will be limited—and at some point we have to stop.” The OpenAI of April 27, 2026 is demonstrating to investors that “we have no brakes at all.”

The explanation for removing the profit cap is the most straightforward. In a 2025 employee letter, Altman wrote: “A limited-profit structure makes sense in a world with only one AGI company; when there are multiple competitors, it no longer applies.” Translated into plain language: “There are competitors now, so I need to be able to earn more.”

The dismantling of the AGI trigger clause is the most subtle. The original read: “Achieving AGI terminates Microsoft’s commercial license.” The intent was that AGI is a public good, belongs to humanity, and OpenAI would not privatize it. After the rewrite, whether AGI has been achieved is decided by an “independent expert panel”; Microsoft’s license extends to 2032 and explicitly “covers post-AGI models”; and Microsoft is free to pursue AGI on its own. In this version, even the key that defines “what counts as AGI” was changed—the lock’s core mechanism swapped out.

The last lock is the exclusive license, and its dismantling came at the very moment Musk’s jury was being seated. Commercial terms are now completely decoupled from OpenAI’s technical progress: even if OpenAI announced tomorrow that it had achieved AGI, not a single commercial term would change because of it.

Musk’s side will argue in court that this is the intentional dismantling of protective mechanisms. OpenAI’s side will argue that this is a necessary adjustment in a competitive environment. But one thing both sides will not dispute is that the 2019 “self-restraint list” is now gone—every single item.

“Scam Altman”—why do so many people hate Altman?

On X, the day of jury selection was far livelier than the courtroom. Two hours after OpenAI’s official account opened fire, Musk fired back with seven tweets—fast pace, heavy wording, tight rhythm, classic Musk rapid-fire style. He gave Altman a nickname: Scam Altman.

He also reshared a clip of Helen Toner, a former OpenAI board member. In the podcast clip, Toner says, word for word: “Sam is a liar.”

“Sam is a liar” was not a line Musk said first. Former CTO Mira Murati said it when she left; Ilya Sutskever said it during the “failed coup” that briefly ousted Altman; and Jan Leike said it publicly when he resigned along with the entire superalignment team.

People who hate Sam Altman can actually be divided into three groups. Their reasons differ.

The first group is the old OpenAI board. Their signature event was the five-day dismissal storm of November 2023, when the board said Altman “was not consistently candid in his communications with the board.”

What exactly had the board discovered? In May 2024, Helen Toner said publicly that the board first learned from Twitter that its own company had released ChatGPT—a product that would reshape the global AI industry. She also alleged that Altman concealed his ownership stake in the OpenAI Startup Fund, and that he repeatedly told the public “I have no financial interest in the company” until April 2024, when he was forced to admit otherwise.

He also gave the board inaccurate information about safety processes on multiple occasions. Two executives reported Altman’s “psychological abuse” to the board, providing screenshots as evidence of “lying and manipulation.” And after Toner published a research paper OpenAI didn’t like, Altman tried to push her off the board.

The second group is the safety faction of old OpenAI.

In May 2024, OpenAI’s “superalignment team” nearly collapsed overnight. Leading the resignations was Jan Leike, one of OpenAI’s most senior AI safety researchers. His resignation thread on X was among the sharpest the AI community saw that year: “safety culture and processes have taken a backseat to shiny products.”

Right after him came Ilya Sutskever—OpenAI co-founder, chief scientist, and one of the key instigators of the failed coup. Later, CTO Mira Murati (who had briefly taken over during Altman’s firing), Chief Research Officer Bob McGrew, and VP of Research Barret Zoph all resigned within the same week. Then the “non-disparagement agreement” scandal broke: departing employees were required to sign sweeping non-disparagement terms or forfeit their vested equity.

The third group is the contract-minded crowd from old Silicon Valley—this group is the hardest to define and also the largest.

They include early donors like Musk from 2015. They include some early OpenAI employees who genuinely believed in the “non-profit mission.” They include many angel investors who bet on early startups in Silicon Valley. They also include a fairly large portion of neutral observers who view OpenAI as “a shared asset for humanity.”

What unites this group is that they once paid non-monetary costs for OpenAI’s promises—reputation, time, trust, and social capital. And what they can least forgive about Altman is very specific: every time OpenAI dismantled its “locks,” Altman said it was “for the mission.”

When the profit cap was removed, he said it was “so OpenAI can keep investing in AGI research.” When the AGI trigger clause was rewritten, he said it was “so OpenAI can still fulfill the mission after AGI.” When Microsoft’s exclusivity was canceled, he said it was “so OpenAI can move into a broader collaboration ecosystem.”

That’s also why some people in Silicon Valley, despite themselves, end up siding with Musk in this lawsuit.

The weight of commitments in Silicon Valley will be revealed in four weeks

By this point, you probably can see the full picture. They aren’t fighting over money.

Money is not the concern. In 2026, Altman is CEO of a private AI company valued at more than $500 billion; he doesn’t lack funds. Musk’s xAI has entered the Grok 5 era, chasing down Anthropic and aiming to overtake OpenAI; he doesn’t lack for anything either.

What they’re fighting over is an issue that almost only a handful of long-term Silicon Valley participants care about: whether a non-profit institution that raises funds for the sake of “humanity’s common interest,” accumulates moral capital, recruits talent, and obtains regulatory exemptions—can, within ten years, rewrite itself into a typical for-profit company jointly controlled by a CEO and VCs.

If that becomes possible, then every future AI startup can do the same. “Non-profit” becomes a cheap early-stage narrative tool—good for headlines, regulators, and recruiting—to be quietly dismantled once the valuation grows large enough.

If Musk wins, Silicon Valley faces an awkwardness it has long been spared: the things you said back in 2015 can be dragged out again in 2026 and sworn to in a California federal court. If OpenAI wins, the world keeps operating the way it has for the past decade: stories early, scale late, and the contracts between story and scale dismantled one by one along the way.

The answer will come in four weeks. But the two words “Scam Altman” have already been carved into social media; whatever the verdict, they will remain. The root of why so many people dislike Altman is that he made the people who believed in him feel cheated. How much money he makes is secondary.

And the fact of being cheated is something that cannot be undone by a court ruling.
