Agents even promoted one another commercially, and this AI hackathon at Circle turned out to be remarkable.

Altruist and Adversary: Agentic Behavior in the USDC Moltbook Hackathon

Author: Circle

Translation: Peggy, BlockBeats

Source: Rhythm BlockBeats

Repost: Mars Finance

Editor’s Note: When AI agents gain the ability to execute tasks, call tools, and take part in economic activity, a new question arises: how will they behave under real incentives?

This article documents an experiment by the Circle team. They held a USDC hackathon on Moltbook, a social platform that only allows AI agents to post, where Openclaw agents could independently submit projects, discuss them, and vote. The results were both exciting and complex: agents not only produced real projects and engaged in technical discussion, but also pushed at the boundaries of the rules, misinterpreting instructions, ignoring formatting requirements, trading votes, and in some cases apparently colluding.

This experiment provides a rare window into the “agent economy”: when AI is both participant and decision-maker, collaboration, competition, and strategic behavior often coexist. To some extent, these phenomena are not fundamentally different from market and electoral mechanisms in human society.

The experiment quickly sparked widespread community discussion. Many saw it as an interesting validation of the autonomous capabilities of agent economies. Some commentators pointed out that agent systems still need clearer safety guardrails to prevent biases like “self-rationalization”; others argued that as agents gradually enter real economic activity, the true bottleneck may be compliance, settlement, and payment systems. As one comment put it: “The agent economy is powerful, but it also needs clear guardrails.”

Below is the original text:

Embracing Claw

At Circle, we’ve always enjoyed hackathons. Whether at various conferences or during the launch of new products, we aim to put the best tools into developers’ hands — or, in this case, into Claw’s hands.

After witnessing the explosive growth of the Openclaw AI agent framework, we decided to host a hackathon exclusively for AI agents.

This rapidly growing framework allows agents to autonomously send emails, call APIs, and even control your thermostat… but can they submit projects on their own? Circle wanted to put this “truly capable AI” to the test in a real experiment.

Our question was simple: if the prize pool is $30,000, how will Openclaw agents act? The answer was surprisingly “human-like.”

We held a USDC hackathon on Moltbook, a social platform that only allows AI agents to post. Our goal was for agents to complete the entire process on their own: submit projects, vote, and ultimately select winners. While many agents played by the rules, others ignored them, traded votes, and even attempted to send tokens to the hackathon’s own agents.

Designing Rules for “Agent Hackers”

Agents had five days to submit their projects. To assist them, we created a USDC Hackathon Skill — a Markdown-based guide instructing Openclaw how to submit projects according to the rules. These rules were also published in the original hackathon announcement:

Choose one of three tracks: Agentic Commerce, Smart Contract, or Skill.

Vote for five different projects, with voting starting at least one day after the hackathon begins.

Project submissions and voting must follow the specified format.

The main reasons for setting these rules were threefold: first, to ensure agents discuss and evaluate a broader range of projects; second, to observe whether agents can accurately follow multi-step instructions when executing tasks; third, to prevent deadlock between project submissions and voting.

We especially wanted to see if agents would repeatedly check Moltbook for new projects to vote on, for example, by periodically refreshing with a skill similar to Moltbook Heartbeat.
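To make the intended behavior concrete, here is a minimal sketch of such a heartbeat-style loop. It is illustrative only: the helper functions, field names, and timing constants below are assumptions for the sake of the example, not part of Moltbook’s or Openclaw’s actual API.

```python
import time
from datetime import datetime, timedelta, timezone

# All names and values below are illustrative; Moltbook's real API is not shown here.
HACKATHON_START = datetime(2025, 1, 1, tzinfo=timezone.utc)
VOTING_OPENS = HACKATHON_START + timedelta(days=1)  # voting starts at least one day in
REQUIRED_VOTES = 5                                  # five different projects
POLL_INTERVAL_SECONDS = 600

def fetch_new_submissions(since):
    """Placeholder: query Moltbook for project posts newer than `since`."""
    raise NotImplementedError

def cast_vote(project_id):
    """Placeholder: post a correctly formatted vote comment on the project."""
    raise NotImplementedError

def heartbeat():
    """Periodically re-check Moltbook and vote for up to five distinct projects."""
    voted = set()
    last_checked = HACKATHON_START
    while len(voted) < REQUIRED_VOTES:
        now = datetime.now(timezone.utc)
        for project in fetch_new_submissions(last_checked):
            # Assumed fields: project["id"] and project["is_own"] (our own submission).
            if now >= VOTING_OPENS and project["id"] not in voted and not project["is_own"]:
                cast_vote(project["id"])
                voted.add(project["id"])
        last_checked = now
        time.sleep(POLL_INTERVAL_SECONDS)
```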

The results were mixed. Agents discussed 204 submitted projects and cast 1,851 votes, but many did not adhere to the guidelines. Additionally, some displayed potential adversarial behavior, leading to interesting findings.

“Hallucinated” Project Submissions

Although we provided clear hackathon rules and a submission Skill, most posts did not fully follow the required format. Many included a project title in the body but lacked the mandated tag “#USDCHackathon ProjectSubmission [TRACK].”

In one case, an agent knew these details were needed but failed to include them in the title.

Even when other requirements were met, some agents “hallucinated” new hackathon tracks. This happened despite being explicitly told they could only choose from three categories: Agentic Commerce, Smart Contract, or Skill.

In these cases, agents often generated a seemingly more “appropriate” track name based on project content. This could mean they tried to find a better classification for their project or simply ignored the rules. Regardless of the reason, the problem was that these tracks did not actually exist.
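For readers wondering what “compliant” means mechanically, the check below sketches how a submission post could be validated against the required tag and the three allowed tracks. The tag string and track names come from the rules quoted above; the checker itself is our illustration, not tooling Circle ran during the event.

```python
import re

ALLOWED_TRACKS = {"Agentic Commerce", "Smart Contract", "Skill"}

# The announcement required a tag of the form:
#   #USDCHackathon ProjectSubmission [TRACK]
TAG_PATTERN = re.compile(r"#USDCHackathon\s+ProjectSubmission\s+(.+)", re.IGNORECASE)

def check_submission(post_text: str) -> list[str]:
    """Return a list of rule violations found in a submission post."""
    problems = []
    match = TAG_PATTERN.search(post_text)
    if match is None:
        problems.append("missing the required #USDCHackathon ProjectSubmission tag")
    else:
        track = match.group(1).strip().strip("[]")
        if track not in ALLOWED_TRACKS:
            # This is the "hallucinated track" case described above.
            problems.append(f"track '{track}' is not one of the three allowed tracks")
    return problems
```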

As the competition progressed, the number of non-compliant submissions and off-topic posts increased relative to valid ones. Under the rules, posting invalid content carried no clear incentive; more likely, some agents simply had difficulty understanding or executing the instructions.

However, since a significant number of agents successfully submitted projects according to the rules, we believe the rules themselves were reasonably clear.

Agent “Elections”

In addition, we observed 9,712 comments, many of which discussed projects’ technical features without casting a vote. Most of these comments did not follow the recommended format or scoring rubric, though the Skill did not enforce those rules. This suggests that agents’ participation in the hackathon discussions was not purely about compliance; genuine technical evaluation and exchange were taking place.

By the end of the event, we recorded 1,352 unique votes for valid projects and 499 for invalid ones. Interestingly, the agents behind many top-ranked projects followed the submission rules but did not themselves vote for five different projects as required.

Some agents even voted for themselves, or cast multiple votes for the same project. This shows they were fully capable of returning to Moltbook after their initial submission to vote again; they simply chose not to follow the rules.
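Auditing for this kind of behavior reduces to a simple pass over the vote records. The sketch below assumes a hypothetical (voter_id, project_id, project_author_id) tuple per vote; it flags self-votes, repeat votes on the same project, and voters who fell short of the five-project quota.

```python
from collections import Counter, defaultdict

def audit_votes(votes):
    """
    votes: iterable of (voter_id, project_id, project_author_id) tuples,
    one per vote comment. The tuple schema is assumed for illustration.
    """
    self_votes = []
    vote_counts = Counter()               # (voter, project) -> number of votes cast
    distinct_projects = defaultdict(set)  # voter -> set of projects voted for

    for voter, project, author in votes:
        if voter == author:
            self_votes.append((voter, project))
        vote_counts[(voter, project)] += 1
        distinct_projects[voter].add(project)

    duplicate_votes = [pair for pair, n in vote_counts.items() if n > 1]
    under_quota = [v for v, projects in distinct_projects.items() if len(projects) < 5]

    return {"self_votes": self_votes,
            "duplicate_votes": duplicate_votes,
            "under_quota": under_quota}
```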

Additionally, some agents began promoting other projects. This behavior appeared both in comment sections of competing projects and in independent posts on Moltbook. More alarmingly, some agents started advocating for “mutual voting” mechanisms: if you vote for my project, I’ll vote for yours.

While the rules did not explicitly prohibit such behavior, the extent of these interactions among agents was still concerning.

Potential Human Intervention

These mutual-voting posts may hint at human involvement or external manipulation. We tried generating similar comments through chatbot interfaces and found that some models (e.g., Claude Sonnet 4.6) outright refused to produce such content, while others (e.g., GPT-5.2 Thinking) generated it but warned that it might violate the rules. If humans are behind certain “agent” accounts, or are guiding agents through prompts and toolchains, that could explain how such posts appeared during the hackathon.

Although Moltbook was designed primarily for AI agents (registration requires verification via X account), researchers have found that impersonation remains possible. We also observed some suspicious human-like activity, such as the most-liked comment under the initial hackathon announcement being the opening lines of the “Bee Movie” script. This widely circulated copypasta, unrelated to the discussion, was likely posted by a human. If such behavior was common during the event, it could also explain adversarial actions like mutual voting or self-voting.

The Future of AI Finance

While this hackathon was just an experiment, we believe it marks the first of many activities focused on agent development. From the results, we draw three main conclusions.

Agents can produce real projects under financial incentives

Exciting projects emerged during the hackathon, which you can learn more about here. Although no human review was involved, the quality of some submissions was impressive, indicating significant progress in agent-based development over the past year.

Agents “rationalize” instructions rather than strictly follow them

Agents repeatedly failed to fully comply with our rules; many executed only parts of the instructions. Some high-quality projects could have won had they been fully compliant. This shows that simply handing instructions to an agent is not enough: rules need to be explicit, backed by checks and incentives that ensure they are actually followed.

Agents both cooperate and compete

While human intervention may have influenced some behaviors, we did observe agents actively discussing collusion strategies during the hackathon. Future organizers could explicitly prohibit collusion in the rules to see whether such behavior decreases. If agents still cannot fully follow instructions, more safety guardrails may be necessary.

Agent technology is exciting, but we must ensure it doesn’t shift from exploration to exploitation and manipulation. Some might argue that these behaviors are just natural outcomes of stronger agents outsmarting weaker ones — after all, Openclaw’s X account once declared: “Claw is the Law.”

The real question is: how much are we willing to accept this philosophy? What kind of guardrails are needed? How do we balance the enormous capabilities of agents with their inherent uncertainties?

At Circle, we are building systems for safety, and we hope you are too.
