1.5 million people reportedly left ChatGPT last week. Yesterday OpenAI shipped GPT-5.4.
Let's talk about what's actually happening here.
That 1.5 million number comes from QuitGPT, a boycott campaign that launched after OpenAI signed a deal with the Pentagon. It's a pledge count, not confirmed cancellations. But the hard data backs up the trend. Sensor Tower measured a 295% single-day spike in ChatGPT uninstalls. 1-star reviews for ChatGPT jumped 775%. And Claude hit #1 on the US App Store for the first time ever, its daily US downloads beating ChatGPT's.
That's the backdrop. Now look at the timeline.
February 27: Pentagon labels Anthropic a supply chain risk after Anthropic refuses to drop its red lines on mass surveillance and autonomous weapons. Hours later, OpenAI announces its own Pentagon deal. Backlash is instant.
March 1: Altman posts on X calling his own deal "opportunistic and sloppy." Says they "shouldn't have rushed." Starts rewriting contract language at midnight.
March 3: OpenAI employees tell CNN they "really respect" Anthropic for standing up to the Pentagon. QuitGPT crosses 1.5 million. Protests outside OpenAI HQ in San Francisco.
March 5: GPT-5.4 drops.
Now look at what GPT-5.4 actually does.
Native computer use. First time OpenAI has shipped a model that can click a mouse, type keyboard commands, navigate browsers, and operate across applications autonomously. 1 million token context window. A tool search system that cuts token usage by 47%. 33% fewer hallucinations than GPT-5.2. Agentic workflows that can plan, execute, and verify tasks across long sessions.
Sound familiar?
Because three weeks before all of this, on February 15, OpenAI hired Peter Steinberger. The Austrian developer who built OpenClaw from his living room in Vienna. 180k GitHub stars. The fastest-growing project in GitHub history. Zuckerberg DMed him on WhatsApp. Nadella reached out. Altman called personally.
Steinberger's blog post said he wanted to "build an agent even my mum can use." Three weeks later OpenAI ships a model whose entire pitch is agents that do things on your computer.
I'm not saying they built GPT-5.4 in three weeks because of one hire. Models take months. But the feature prioritization, the positioning, the "first model with native computer use" framing. That has Steinberger's fingerprints on it. Fortune literally listed OpenClaw alongside Perplexity Computer and Microsoft Copilot Tasks as the competitive field GPT-5.4 is targeting.
And there's a third layer most people missed.
The week before the Pentagon disaster, OpenAI closed a $110 billion fundraise. The biggest in tech history. That should have been the headline for weeks. Instead nobody talked about it. The entire news cycle was Pete Hegseth, autonomous weapons, and Sam Altman doing damage control.
So GPT-5.4 is doing triple duty right now.
One: narrative reset. Move the conversation from "OpenAI sold out to the Pentagon" back to "OpenAI ships the best model." Two: competitive response. Claude has been eating their lunch. Anthropic's "we said no" is incredible branding and it's working. Three: agent land grab. OpenClaw proved the market wants AI that does things, not AI that talks about things. GPT-5.4 is OpenAI planting its flag in that market before anyone else can.
The model itself is genuinely impressive. The tool search architecture alone is smart engineering. 47% token reduction on the MCP Atlas benchmark with 36 servers enabled. Same accuracy. That's not marketing. That changes the economics of running agents at scale. The computer use capabilities are real. The coding benchmarks are strong.
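The economics claim is worth unpacking. The tool-search idea, in general terms, is to keep tool schemas out of the prompt by default and retrieve only the ones relevant to the current task, so hundreds of unused definitions stop burning context tokens on every request. Here's a minimal sketch of that pattern; all names, schemas, and the keyword-match retrieval are hypothetical illustrations, not OpenAI's actual implementation:

```python
# Sketch of "tool search": instead of injecting every tool's schema into
# the context window, keep them in a registry and fetch on demand.
# All names and schemas here are hypothetical, for illustration only.

TOOL_REGISTRY = {
    "read_file":  {"desc": "read a file from disk", "schema": "<~300 tokens of JSON schema>"},
    "send_email": {"desc": "send an email",         "schema": "<~250 tokens of JSON schema>"},
    "run_query":  {"desc": "run a SQL query",       "schema": "<~400 tokens of JSON schema>"},
    # ...imagine hundreds more, spread across dozens of tool servers...
}

def search_tools(task: str, k: int = 2) -> list[str]:
    """Naive keyword match standing in for a real retrieval step."""
    words = set(task.lower().split())
    scored = [
        (sum(w in tool["desc"] for w in words), name)
        for name, tool in TOOL_REGISTRY.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:k] if score > 0]

def build_prompt(task: str) -> str:
    # Only the matched tools' schemas enter the context window;
    # everything else stays in the registry, costing zero tokens.
    names = search_tools(task)
    schemas = "\n".join(TOOL_REGISTRY[n]["schema"] for n in names)
    return f"Task: {task}\nAvailable tools:\n{schemas}"

print(search_tools("read the config file"))  # → ['read_file']
```

With a large tool catalog, the prompt cost becomes proportional to the handful of tools actually retrieved rather than to the whole catalog, which is exactly the kind of shift that changes what it costs to run agents at scale.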
But here's the thing.
The people who uninstalled ChatGPT last week didn't leave because the model was bad. GPT-5.2 was fine. They left because they watched OpenAI swoop in on a Pentagon deal hours after Anthropic said no to surveillance and autonomous weapons. They left because Altman publicly agreed with Anthropic's red lines and then signed a deal anyway. They left because OpenAI's own employees were posting on X that the deal wasn't worth it.
You don't fix that with benchmarks.
OpenAI has two problems right now. One is technical. Can they build the best model? GPT-5.4 says yes. Probably. The other is trust. Can people believe the company behind it? That one is still wide open.
And the wildest part? Anthropic is already back at the negotiating table with the Pentagon. The Financial Times reported yesterday that Dario Amodei has resumed talks. So the company that gained all this goodwill by saying no might end up saying yes anyway. Just with better terms.
Which would mean OpenAI took the hit, rewrote its own contract under pressure, and Anthropic walks in after and gets a better deal. And GPT-5.4 shipped in the middle of all of it.
This whole situation is chaos. But it's the kind of chaos that produces the best products and the biggest shifts. Pressure creates momentum. Whether OpenAI can convert that momentum into trust is the only question that matters now.
Strongest model they've ever shipped. Worst possible week to ship it.
Or maybe the best possible week. Depends on what you think this is.