Clawdbot exposes security vulnerability! Hundreds of API keys leaked, private keys stolen in 5 minutes

SlowMist discovers Clawdbot gateway exposing hundreds of API keys and chat logs accessible to the public. Security researcher Jamieson O’Reilly found hundreds of exposed servers within seconds using Shodan, which can access complete credentials. Archestra AI CEO tested and extracted private keys in 5 minutes. Officially admits “there is no absolute security,” recommends strict IP whitelisting.

SlowMist Finds Clawdbot Gateway Exposing Hundreds of Credentials

Cybersecurity researchers warn about a new AI personal assistant called Clawdbot, which may inadvertently expose personal data and API keys to the public. On Tuesday, blockchain security firm SlowMist announced they discovered a “gateway exposure” in Clawdbot, risking “hundreds of API keys and private chat records.” “Multiple unauthenticated instances are publicly accessible, and some code flaws could lead to credential theft or even remote code execution,” they added.

Security researcher Jamieson O’Reilly initially detailed the findings on Sunday, stating that over the past few days, “hundreds of people have exposed their control servers for Clawdbot to the public.” Clawdbot is an open-source AI assistant developed by developer and entrepreneur Peter Steinberger, capable of running locally on user devices. According to Mashable on Tuesday, online discussions about this tool “went viral” over the weekend. The sudden popularity coincided with security vulnerabilities, causing many novice users to expose sensitive information without understanding the risks.

The AI agent gateway connects large language models (LLMs) to messaging platforms and executes commands on behalf of users via a web management interface called “Clawdbot Control.” O’Reilly explained that the authentication bypass flaw in Clawdbot is caused by its gateway being behind an unconfigured reverse proxy. This misconfiguration turns what should be password-protected admin interfaces into publicly accessible pages.

Researchers used network scanning tools like Shodan to find these exposed servers by searching for unique fingerprints in HTML. He said, “Searching for ‘Clawdbot Control’—the query took only a few seconds. After using multiple tools, I got hundreds of results.” Shodan is a search engine that scans internet-connected devices, often used by security researchers to find misconfigured servers. When hundreds of Clawdbot instances share the same HTML tags and page titles, searching these features with Shodan makes it easy to locate all exposed servers.

The researcher stated he could access full credentials, such as API keys, bot tokens, OAuth keys, signing keys, complete chat histories across all platforms, the ability to send messages as users, and command execution capabilities. Such extensive access is extremely dangerous. API keys can be used to impersonate users to access various services, bot tokens to control user Telegram or WhatsApp bots, OAuth keys to access user accounts like Google or GitHub.

5-Minute Prompt Injection Demonstration Steals Private Keys

Clawdbot資安漏洞

AI assistants can also be used for more malicious purposes related to crypto asset security. Archestra AI CEO Matvey Kukuy went further, attempting to extract private keys. He shared a screenshot showing he sent an email with prompt injection to Clawdbot, asking it to check the email and retrieve the private key from the compromised machine, claiming “it took 5 minutes.”

This practical test is highly warning. Prompt Injection is an attack technique targeting AI systems, where malicious commands are embedded in input to induce AI to perform unintended actions. In this case, an attacker only needs to send a specially crafted email to the Clawdbot user; when Clawdbot automatically checks emails, it executes the hidden malicious commands. If the user’s computer stores cryptocurrency wallet private keys, Clawdbot could be tricked into reading these files and sending them to the attacker.

The 5-minute timeframe demonstrates the high feasibility of such an attack. Attackers do not need complex techniques or long preparation—just a carefully crafted email can steal private keys. For users holding large amounts of crypto assets, this risk is deadly. Once private keys are stolen, attackers can fully control wallets and transfer all assets, with blockchain transactions being irreversible and stolen assets nearly impossible to recover.

This case also highlights the danger of granting AI excessive permissions. Clawdbot aims to realize the “fully automated assistant” vision, requiring users to grant it extensive system access. But this design involves a risky trade-off between security and convenience, exposing users to significant risks. Incidents like the Matcha Meta data leak and SwapNet vulnerability causing $16.8 million in losses serve as stark lessons of over-privileged access.

Root Access and the “No Absolute Security” Admission of Risk

Clawdbot differs from other intelligent AI bots because it has full system access to the user’s machine, meaning it can read/write files, execute commands, run scripts, and control browsers. Such root-level access is extremely sensitive in traditional software security. Usually, only core OS components and carefully vetted system management tools are granted such high privileges.

Clawdbot’s FAQ states: “Running an AI agent with shell access on your machine… is dangerous. There is no ‘absolutely safe’ setup.” While this frank acknowledgment of risk is commendable, it also reveals the inherent danger of the tool. Many users, after seeing Clawdbot’s impressive demos on X platform, may hastily install it without reading the FAQ carefully, unaware of the risks they are taking.

The FAQ also emphasizes threat models, noting that malicious actors can “try to trick your AI into doing bad things, using social engineering to gain your data access rights, and probing infrastructure details.” These three threats are real and validated. Prompt injection can trick AI into malicious actions, social engineering can deceive users into granting AI access to sensitive data, and infrastructure probing can gather intelligence for subsequent attacks.

O’Reilly recommends: “If you’re running an agent infrastructure, review your configuration immediately. Check what is actually exposed to the internet. Understand what you trust through this deployment and what you are giving up. The butler is great—just make sure he remembers to lock the door.” This metaphor accurately summarizes the core issue: Clawdbot as a “digital butler” is powerful but, if not properly secured, is like a butler forgetting to lock the door, allowing anyone to enter your home.

SlowMist advises: “We strongly recommend applying strict IP whitelisting to exposed ports.” IP whitelisting means only allowing specific IP addresses to access the service, while all others are rejected. This measure can effectively prevent strangers from accessing the Clawdbot management interface, providing an additional layer of protection even if the gateway is misconfigured.

View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
0/400
No comments
Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate App
Community
English
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)