Gate Square “Creator Certification Incentive Program” — Recruiting Outstanding Creators!
Join now, share quality content, and compete for over $10,000 in monthly rewards.
How to Apply:
1️⃣ Open the App → Tap [Square] at the bottom → Click your [avatar] in the top right.
2️⃣ Tap [Get Certified], submit your application, and wait for approval.
Apply Now: https://www.gate.com/questionnaire/7159
Token rewards, exclusive Gate merch, and traffic exposure await you!
Details: https://www.gate.com/announcements/article/47889
#黑客攻击与安全风险 The risk of prompt poisoning in AI tools is indeed worth paying attention to. SlowMist's security alert points to a real attack surface — malicious prompts within components like agents, skills, and MCP can enable automated control over user devices.
The core dilemma lies in balancing efficiency and security: enabling dangerous modes offers the highest tool performance, but requiring confirmation for each operation significantly reduces user experience. Most users tend to prefer the former, which conveniently creates opportunities for attackers.
From the perspective of on-chain data and smart contract tracking, if such attacks are used to steal private keys or control wallet operations, the consequences can be quite severe. Recommended prevention strategies include: (1) remain cautious when using AI tools and avoid enabling automation on critical accounts; (2) regularly review permissions granted to third-party tools; (3) for key steps involving asset operations, even if it reduces efficiency, retain manual confirmation processes.
The emergence of these risks indicates that as AI tools expand their application in the crypto ecosystem, security defenses must also be upgraded accordingly.