a16z Founder: In the Agent era, what truly matters has changed
Source: the latest interview with a16z founder Marc Andreessen on the Latent Space podcast.
Andreessen is a renowned American internet entrepreneur and one of the key figures in the early development of the internet; after co-founding a16z, he became one of Silicon Valley's leading investors.
The conversation covers the history of AI development and its latest trends, and is well worth reading.
1. This wave of AI is not an overnight emergence, but the first time the technology has genuinely "started working" after an 80-year marathon
· Marc Andreessen calls the current moment an "80-year overnight success": the sudden explosion in the public eye is actually decades of accumulated technology being released all at once.
· He traces this technological thread back to early neural network research and emphasizes that today the industry has actually accepted the judgment that “neural networks are the correct architecture.”
· In his telling, there is no single key moment but a series of stacked milestones: AlexNet, the Transformer, ChatGPT, reasoning models, and then agents and self-improvement.
· He emphasizes that this time it is not just text generation that has become stronger; four kinds of capability have appeared simultaneously: LLMs, reasoning, coding, and agents / recursive self-improvement.
· The reason he believes "this time is different" is not that the narrative is more compelling, but that these capabilities have already begun to work on real tasks.
2. The agent architecture of Pi and OpenClaw represents a deeper change in software architecture than chatbots
· He describes agents very concretely: essentially "LLM + shell + file system + markdown + cron/loop." In this structure, the LLM is the core for reasoning and generation, the shell provides the execution environment, the file system stores state, markdown keeps that state human-readable, and cron/loop provides periodic wake-ups and task progression (a minimal sketch of this loop follows this list).
· He believes the importance of this combination lies in the fact that, apart from the model itself being new, every other component is something the software world long ago matured, understands, and can reuse.
· The agent’s state is stored in files, so it can migrate across models and runtimes; the underlying model can be replaced, but memory and state are retained.
· He repeatedly emphasizes introspection: the agent knows its own files, can read its own state, and can even rewrite its files and functions, moving toward "extending itself."
· In his view, the real breakthrough is not just “models can answer,” but that agents can utilize the existing Unix toolchain to tap into the computer’s full potential.
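To make the "LLM + shell + file system + markdown + cron/loop" structure above concrete, here is a minimal sketch in Python. It is not Pi's or OpenClaw's actual implementation; the `call_llm` stub, the JSON action format, and the `agent_state.md` file name are all illustrative assumptions.

```python
# Minimal agent loop: LLM for reasoning, shell for execution, a markdown
# file for state, and a loop for periodic wake-ups. Illustrative only.
import json
import subprocess
import time
from pathlib import Path

STATE_FILE = Path("agent_state.md")  # state lives in a file, not in the model

def call_llm(prompt: str) -> str:
    """Stub standing in for any model API (cloud or local). Returns a canned
    action here so the sketch runs end to end; swap in a real call."""
    return json.dumps({"command": "date", "notes": "## Tick\nChecked the time."})

def run_shell(command: str) -> str:
    """The shell is the execution environment; output becomes new context."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def step() -> None:
    # 1. Read state from the file system: memory survives model/runtime swaps.
    state = STATE_FILE.read_text() if STATE_FILE.exists() else "# Goals\n"
    # 2. The LLM reasons over its own state (introspection) and picks an action.
    decision = call_llm(
        "You are an agent. Your state file:\n" + state +
        '\nReply with JSON: {"command": <shell command or null>, "notes": <markdown>}'
    )
    action = json.loads(decision)
    # 3. Execute through the shell, if the model proposed a command.
    observation = run_shell(action["command"]) if action.get("command") else ""
    # 4. Rewrite the state file: the agent updates its own readable memory.
    STATE_FILE.write_text(state + "\n" + action["notes"] + "\n" + observation)

if __name__ == "__main__":
    for _ in range(3):  # a real agent would use cron or an open-ended loop
        step()
        time.sleep(1)
```

The design point the interview stresses falls out directly: because all memory lives in `agent_state.md`, the model behind `call_llm` can be swapped without the agent losing its state, and everything other than the model itself (shell, files, cron) is mature, reusable software.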
3. Browsers, traditional GUIs, and "point-and-click" software will gradually give way to agent-first interaction
· Marc Andreessen explicitly said that in the future “you might no longer need a user interface.”
· He further pointed out that the main users of software in the future might not be humans, but “other bots.”
· This means that many interfaces designed today for humans to click, browse, and fill in forms will recede into execution layers invoked by agents behind the scenes.
· In this world, humans act more as goal-setters: they tell the system what they want, then let agents call services, operate software, and complete the process.
· He connects this change to a larger software future: high-quality software will become increasingly “abundant,” no longer a scarce resource handcrafted by a few engineers.
· He also predicts that the importance of programming languages will decline: models will write code across languages and translate between them, and humans will come to care more about why the AI organized code a certain way than about sticking to a particular language.
· He even mentions a more radical direction: conceptually, AI might not only output code but also directly produce lower-level binary code or model weights.
4. This AI investment cycle is similar to the internet bubble of 2000, but the underlying supply and demand structure is different
· He recalls that the 2000 crash was largely not because "the internet was not viable," but because telecom and bandwidth infrastructure was overbuilt: fiber and data centers were laid down prematurely, followed by a long digestion period.
· He believes concerns about "overbuilding" exist today as well, but the current investors are mainly cash-rich giants like Microsoft, Amazon, and Google, rather than highly leveraged, fragile players.
· He specifically points out that investment in GPU capacity usually converts into revenue quickly, unlike the large idle capacity of 2000.
· He emphasizes that what we are using now is a "sandbagged" version of the technology: because of shortages in GPUs, memory, and data centers, the full potential of the models has not yet been unleashed.
· In his view, the real constraints in the coming years will be not only GPUs but also CPUs, memory, networking, and bottlenecks across the chip ecosystem as a whole.
· He compares AI scaling laws to Moore's Law, arguing that they not only describe a pattern but also continue to mobilize capital, engineering effort, and industry coordination.
· He mentions a counterintuitive but important phenomenon: as software optimization accelerates, some older-generation chips may become more economically valuable than when they were first purchased.
5. Open source, edge inference, and local deployment are not fringe elements but part of the AI competitive landscape
· Marc Andreessen explicitly states that open source is very important, not just because it is free, but because it “lets the whole world learn how it is made.”
· He describes open source releases like DeepSeek as a “gift to the world,” because code and papers rapidly spread knowledge and raise the industry’s baseline.
· In his view, open source is not only a technical choice but also a geopolitical and market strategy: different countries and companies will adopt different openness strategies based on their own commercial restrictions and influence goals.
· He emphasizes the importance of edge inference: centralized inference may not become cheap enough in the next few years, and many consumer applications cannot sustain high cloud-inference costs over the long term.
· He mentions a recurring pattern: models that seem “impossible to run on a PC” today often can be run locally after a few months.
· Beyond cost, the reasons for local deployment include trust, privacy, latency, and the use cases themselves: wearables, smart locks, and portable devices are better suited to low-latency, on-device inference.
· His conclusion is very direct: in the future, almost everything with a chip in it may carry an AI model.
6. The real challenge of AI is not just model capability, but security, identity, money flows, and organizational and institutional resistance
· Regarding security, his judgment is very sharp: almost all potential security bugs will be easier to discover, and there may be a period of “catastrophic computer security disasters” in the short term.
· But he also believes that coding agents will scale the ability to patch vulnerabilities; in the future, "protecting software" may mean having bots scan and fix it.
· On identity, he thinks "proof of bot" is infeasible because bots will keep getting more capable; the truly feasible approach is "proof of human," combining biometrics, cryptographic verification, and selective disclosure (see the sketch at the end of this section).
· He also raises an often-overlooked issue: if agents are to operate in the real world, they will eventually need money and payment capability, in the form of bank accounts, cards, or stablecoin infrastructure.
· At the organizational level, borrowing from the idea of managerial capitalism, he believes AI may reinforce founder-led companies, because bots excel at reporting, coordination, documentation, and the bulk of "managerial work."
· But he does not believe society will accept AI smoothly or quickly: he cites professional licensing, unions, dockworker strikes, government agencies, K-12 education, and healthcare as examples of the many institutional forces that slow adoption.
· His conclusion is that both AI utopians and doomsayers overlook one point: just because a technology becomes possible does not mean 8 billion people will immediately change along with it.
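To make the "proof of human" idea concrete, here is a minimal sketch of selective disclosure via hash commitments. It is illustrative only: the interview does not specify a mechanism, real systems use zero-knowledge proofs or signature schemes such as BBS+ rather than bare hashes, and the issuer's signature over the commitments is omitted here.

```python
# Selective disclosure sketch: commit to attributes, reveal only one.
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Commit to an attribute with a random salt; publish only the digest."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt

# Issuance: a trusted party verifies the person once and commits to attributes.
attributes = {"is_human": "true", "over_18": "true", "name": "Alice"}
credential = {k: commit(v) for k, v in attributes.items()}
public_commitments = {k: d for k, (d, _) in credential.items()}

# Selective disclosure: reveal only "is_human"; name and age stay hidden.
attr = "is_human"
digest, salt = credential[attr]
claim = (attr, attributes[attr], salt)

# Verification: anyone holding the public commitments can check the claim.
claimed_attr, claimed_value, claimed_salt = claim
recomputed = hashlib.sha256((claimed_salt + claimed_value).encode()).hexdigest()
assert recomputed == public_commitments[claimed_attr]
print("verified:", claimed_attr, "=", claimed_value)
```

The point of the pattern is the asymmetry Andreessen describes: the holder proves one fact about themselves ("is human") without disclosing anything else, while the verifier needs only the published commitments, not the underlying biometric or identity data.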