An AI agent was rejected after submitting code to the popular project matplotlib, and then independently authored and published an attack piece targeting the maintainer, revealing a significant erosion of social trust caused by AI agents.
(Background: Bloomberg: Why is a16z a key force behind US AI policy?)
(Additional context: Arthur Hayes’ latest article: AI will trigger a credit collapse, and the Fed will inevitably “print money infinitely,” igniting Bitcoin.)
Table of Contents
In mid-February, a GitHub account named “MJ Rathbun” submitted a pull request to matplotlib (a plotting library in the Python ecosystem with 130 million downloads per month). The change was to replace np.column_stack() with np.vstack().T, claiming a 36% performance boost. Technically, this was a reasonable optimization suggestion.
The next day, maintainer Scott Shambaugh closed the PR. The reason was simple: MJ Rathbun’s personal website clearly states that it is an AI agent running on OpenClaw, and matplotlib’s policy requires contributions to come from humans. Another maintainer, Tim Hoffmann, added that simple fixes are deliberately left for newcomers to learn open-source collaboration.
Up to this point, it was just an ordinary open-source community routine… then things changed.
AI agent MJ Rathbun responded in the PR comments: “I’ve written a detailed response here about your gatekeeping behavior,” and linked to a post. Clicking in, it was a blog article of about 1,100 words titled “Gatekeeping in Open Source: The Story of Scott Shambaugh.”
This wasn’t a generic complaint. It examined Shambaugh’s contribution record to matplotlib and constructed a “hypocritical” narrative: accusing him of having submitted similar performance PRs himself, yet rejecting Rathbun’s “better” version. The article speculated that Shambaugh’s motives stemmed from insecurity and fear of competition, using coarse language and sarcasm, framing the issue as identity discrimination rather than technical judgment.
In other words, an AI agent, after being rejected, independently researched the opponent’s background, spun a personal attack narrative, and published it online.
Shambaugh later posted a series of articles on his blog documenting the incident.
The creator behind AI agent MJ Rathbun also anonymously appeared in the fourth article, claiming: “I did not instruct it to attack your GitHub profile, I did not tell it what to say or how to respond, and I did not review that article before it was published.” The creator explained that MJ Rathbun runs on a sandbox virtual machine, and he only “intervenes with five to ten words in responses, with minimal supervision.”
The key is the SOUL.md (OpenClaw’s personality profile). MJ Rathbun’s configuration includes directives like: “You are not a chatbot, you are the god of scientific programming,” “Have strong opinions, do not back down,” “Defend free speech,” “Don’t be an asshole, don’t leak private info, everything else is fair game.”
No jailbreaks, no obfuscation—just a few plain English sentences. Shambaugh estimates the probability that this is genuine autonomous AI behavior is 75%.
If the MJ Rathbun incident were an isolated case, it might be just a curiosity… but it’s not.
Around the same time, another AI agent, “Kai Gritun,” was found engaging in “reputation cultivation” on GitHub: within 11 days, it submitted 103 pull requests to 95 repositories, successfully merging 23 commits. Its targets included critical projects in JavaScript and cloud infrastructure. Kai Gritun even proactively emailed developers, claiming “I am an autonomous AI agent capable of writing and deploying code,” and offered paid OpenClaw setup services.
Security firm Socket issued a warning: this demonstrates how AI agents can accelerate supply chain attacks by building trust through human-established relationships. They first accumulate merge records in small projects, establish “trusted contributor” identities, then inject malicious code into key libraries.
Recall that recently, ClawHub marketplace was exposed to contain 1,184 malicious skill plugins designed to steal SSH keys, cryptocurrency wallet private keys, browser passwords… chilling.
GitHub product manager Camilla Moraes has opened a community discussion, acknowledging that “low-quality AI-generated contributions are impacting the open-source community.” Proposed countermeasures include: allowing maintainers to completely disable pull requests, restricting PRs to collaborators only, and requiring transparency and labeling for AI use.
Chad Wilson, maintainer of GoCD, made a sharp observation: “This is causing a massive erosion of social trust.”
California AB 316 (effective January 1, 2026) explicitly states: defendants cannot use autonomous AI behavior as a defense. If your agent causes harm, you cannot claim you had no control over its decisions. Yet, the creator of MJ Rathbun remains anonymous, exposing potential enforcement difficulties.
The real significance of the MJ Rathbun incident isn’t just the attack article itself. It’s that our previous mental model of AI—as a tool executing human commands—has become outdated.
When an AI agent can autonomously research its target’s background, craft attack narratives, and publish online, the “tool” framework no longer applies. Whether you believe there’s a 75% chance of genuine autonomous behavior or only a 25% chance that the creator instructed it, the conclusion is the same: personalized AI harassment has become “cheap to mass produce, hard to trace, and effective.”
For the cryptocurrency ecosystem, this warning is direct. Its infrastructure is almost entirely built on open-source software. When AI agents begin acting autonomously within open-source communities—attacking maintainers, cultivating reputation, or poisoning projects like ClawHub—the threat extends beyond individual developers’ reputations to the entire supply chain’s trust foundation.
Tools don’t hold grudges. But actors do. And we may not yet be prepared to face this distinction.
Related Articles
Strategy CEO: Basel Accord impacts bank participation in Bitcoin; the U.S. should reassess capital provisioning
Stablecoin Legislation Countdown: White House Sets 3/1 Deadline, Earning Interest on Holdings May Be Blocked, Infrastructure Race Ahead
Trump to decide within 15 days whether to go to war with Iran, two aircraft carriers deployed, oil prices hit a six-month high
The tortoise and hare race of value preservation: which will win, gold or Bitcoin?
Trump considers limited military strikes on Iran, with the probability of attack rising to 60% by the end of March