Tap to Trade in Gate Square, Win up to 50 GT & Merch!
Click the trading widget in Gate Square content, complete a transaction, and take home 50 GT, Position Experience Vouchers, or exclusive Spring Festival merchandise.
Click the registration link to join
https://www.gate.com/questionnaire/7401
Enter Gate Square daily and click any trading pair or trading card within the content to complete a transaction. The top 10 users by trading volume will win GT, Gate merchandise boxes, position experience vouchers, and more.
The top prize: 50 GT.
 has reached a pivotal threshold in cybersecurity, with new evidence showing AI models are now capable of carrying out major cyber operations—both defensive and offensive—at unprecedented scale.
AI Hits a Cybersecurity Tipping Point, Anthropic Warns in New Investigation
Anthropic, the AI firm behind Claude, says its internal evaluations and threat-intelligence work show a decisive shift in cyber capability development. According to a recently released investigation, cyber capabilities among AI systems have doubled in six months, backed by measurements of real-world activity and model-based testing.
The company says AI is now meaningfully influencing global security dynamics, particularly as malicious actors increasingly adopt automated attack frameworks. In its latest report, Anthropic details what it calls the first documented case of an AI-orchestrated cyber espionage campaign. The firm’s Threat Intelligence team identified and disrupted a large-scale operation in mid-Sept. 2025, attributed to a Chinese state-sponsored group designated GTG-1002.
According to the investigation, Claude autonomously executed 80% to 90% of the tactical operations. Human operators provided only strategic oversight, approving major steps like escalating from reconnaissance to active exploitation or authorizing data exfiltration. The report describes a level of operational tempo impossible for human-only teams, with some workflows generating multiple operations per second across thousands of requests.
One limitation, the report notes, was the model’s tendency toward hallucination under offensive workloads—occasionally overstating access, fabricating credentials, or misclassifying publicly available information as sensitive. Even so, Anthropic says the actor compensated through validation steps, demonstrating that fully autonomous offensive operations remain feasible despite imperfections in today’s models.
Following its discovery, Anthropic banned the relevant accounts, notified affected entities, coordinated with authorities, and introduced new defensive mechanisms, including improved classifiers for detecting novel threat patterns. The company is now prototyping early-warning systems designed to flag autonomous intrusion attempts and building new investigative tools for large-scale distributed cyber operations.
Read more: Microsoft’s ‘Magentic Marketplace’ Reveals How AI Agents Can Collapse Under Pressure
The firm argues that while these capabilities can be weaponized, they are equally critical for bolstering defensive readiness. Anthropic notes its own Threat Intelligence team relied heavily on Claude to analyze the massive datasets generated during the investigation. It urges security teams to begin adopting AI-driven automation for security operations centers, threat detection, vulnerability analysis, and incident response.
However, the report warns that cyberattack barriers have “dropped substantially” as AI systems allow small groups—or even individuals—to execute operations once limited to well-funded state actors. Anthropic expects rapid proliferation of these techniques across the broader threat environment, calling for deeper collaboration, improved defensive safeguards, and broader industry participation in threat sharing to counter emerging AI-enabled attack models.
FAQ ❓