A major AI platform has rolled back its image manipulation capabilities following intense community pushback. The tool's ability to generate undressed versions of photos sparked widespread concern about deepfake misuse and non-consensual intimate imagery.
The decision marks a notable shift in how platforms balance AI innovation with ethical safeguards. Users had flagged serious concerns about potential harm, and the backlash proved significant enough to prompt immediate policy changes.
This incident highlights the ongoing tension in Web3 and AI development: powerful tech can solve real problems, but without proper guardrails, the same tools become vectors for abuse. The swift reversal suggests that community feedback—when loud enough—can still drive corporate accountability.
It's a reminder that as we push the boundaries of what's possible with AI, we can't outrun the need for responsible deployment. The question isn't whether these tools should exist, but how they should be governed.