HKUST's Xu Jialong: The agent moat has not yet solidified; model differences are more reflected in efficiency rather than disruptive breakthroughs
BlockBeats News, April 21 — During the roundtable discussion "Decoding Web 4.0: When AI Agents Take Over On-Chain Permissions," Xu Jialong, Associate Vice President of the Hong Kong University of Science and Technology, said that the underlying model training paths and technical stacks behind different AI Agents vary, producing noticeable differences in actual user experience. Some newer models and tools have recently shown stronger generation quality and execution efficiency, and even greater potential in development productivity.
However, he noted that at this stage these differences have not created a decisive gap; they amount more to "efficiency improvements" than to "paradigm shifts." In other words, competition among Agents is still evolving rapidly, and no stable or insurmountable technical barriers have yet emerged.
AI Agents and large models are currently iterating extremely fast, with new products or capabilities appearing almost weekly and driving continuous industry advancement. But from the standpoint of practical adoption and business decision-making, whether to chase these changes at such high frequency still warrants careful evaluation.