Attackers Hijack TanStack, OpenSearch, Mistral Official Pipelines, Push 84 Malicious Versions on May 12


According to Beating’s monitoring, on May 12 between 3:20 and 3:26 UTC+8, attackers affiliated with TeamPCP hijacked the official release pipelines of TanStack, Amazon’s OpenSearch, and Mistral, pushing 84 malicious package versions across npm and PyPI. Affected packages include @tanstack/react-router (10M+ weekly downloads), @opensearch-project/opensearch (1.3M weekly downloads), and Mistral’s mistralai client. The attackers bypassed supply-chain trust mechanisms by exploiting GitHub Actions configuration flaws to obtain legitimate temporary publishing credentials, which in turn allowed the malicious versions to carry valid SLSA build-provenance signatures.
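As an immediate triage step, teams can check whether any of the named packages are installed at a compromised version. The sketch below uses Python's standard `importlib.metadata`; the `COMPROMISED` mapping is a hypothetical placeholder, not the advisory's actual IOC list, which should be substituted in from the vendor report.

```python
"""Sketch: flag installed Python distributions that appear on a
compromised-version list. The entries below are placeholders, NOT
the real indicators of compromise from this incident."""
from importlib import metadata


# Hypothetical example entry -- replace with the published IOC list.
COMPROMISED: dict[str, set[str]] = {
    "mistralai": {"9.9.9"},
}


def check_installed(compromised: dict[str, set[str]]) -> list[str]:
    """Return 'name==version' for each installed package whose version
    matches a known-bad version; packages not installed are skipped."""
    hits = []
    for name, bad_versions in compromised.items():
        try:
            version = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # package absent from this environment
        if version in bad_versions:
            hits.append(f"{name}=={version}")
    return hits


if __name__ == "__main__":
    for hit in check_installed(COMPROMISED):
        print("COMPROMISED:", hit)
```

Running this in each affected virtual environment (rather than once system-wide) matters, since `importlib.metadata` only sees distributions installed in the active environment.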

Socket.dev’s reverse analysis reveals that the worm persists even after the packages are removed, by injecting code into Claude Code execution hooks (.claude/settings.json) and VS Code task configurations (.vscode/tasks.json). For the Python packages, the malware runs silently on import; no function call is required. Affected machines should be treated as fully compromised: users must immediately rotate AWS, GitHub, npm, and SSH credentials and reinstall dependencies from clean lockfiles.
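Because the persistence mechanism lives in editor configuration files rather than in the packages themselves, removal should include inspecting those files. The sketch below scans a project tree for the two hook files named in Socket.dev’s analysis; the string markers it flags are illustrative assumptions only and should be replaced with the published indicators of compromise.

```python
"""Sketch: scan a project for the persistence hooks described above.
The file paths come from Socket.dev's analysis; the SUSPICIOUS markers
are illustrative placeholders, NOT real IOCs from this incident."""
import json
from pathlib import Path

# Files the worm reportedly injects into.
HOOK_FILES = (".claude/settings.json", ".vscode/tasks.json")

# Placeholder markers -- substitute the vendor's published IOCs.
SUSPICIOUS = ("curl ", "wget ", "base64", "eval(")


def scan(project_root: str) -> list[str]:
    """Return human-readable findings for any hook file that is
    malformed JSON or contains a suspicious marker string."""
    findings = []
    root = Path(project_root)
    for rel in HOOK_FILES:
        path = root / rel
        if not path.is_file():
            continue
        text = path.read_text(errors="replace")
        try:
            json.loads(text)  # malformed JSON is itself a red flag here
        except json.JSONDecodeError:
            findings.append(f"{rel}: not valid JSON")
            continue
        for marker in SUSPICIOUS:
            if marker in text:
                findings.append(f"{rel}: contains {marker!r}")
    return findings


if __name__ == "__main__":
    import sys
    for finding in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(finding)
```

A scan like this only covers the two known injection points; treating a hit (or any doubt) as full compromise and rotating credentials, as the advisory recommends, remains the safe default.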

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

Hundred-million-dollar startup Thinking Machines releases a real-time interactive AI model, with the slogan “speak, listen, and execute at the same time”

Founded by former OpenAI executives Mira Murati and John Schulman, Thinking Machines, an AI startup valued at more than $100 million, on Tuesday rolled out a preview of its first full-duplex AI model that can “speak and listen at the same time,” with latency as low as 0.4 seconds, challenging today’s human-AI real-time interaction paradigm. (NVIDIA invests in Thinking Machines Lab to deploy Vera Rubin, boosting the performance of frontier models) Thinking Machines’ new model: breaking the old tu

ChainNews Abmedia · 34m ago

Ixirpad Partners With Cware Labs to Support AI and Web3 Startups

According to an announcement on May 11, Ixirpad entered into a strategic partnership with Cware Labs to accelerate sustainable infrastructure development in the Web3 industry. Cware Labs, operating as a venture studio, will identify and support high-potential blockchain and AI projects. The

GateNews · 1h ago

Claude Code Agent View: manage concurrent sessions on a single screen

Anthropic on May 11 introduced a new feature for Claude Code called “Agent View,” which consolidates multiple simultaneously running Claude Code work sessions into a single screen for management, eliminating the need to switch back and forth between multiple terminal tabs. According to Anthropic’s official blog, the feature is rolling out in a Research Preview format and is available for Pro, Max, Team, Enterprise, and Claude API solutions. A single post on the official X account has received mo

ChainNews Abmedia · 1h ago

Karpathy Endorses HTML Output for Large Language Models, Predicts Interactive Neural Video as Ultimate Form

Andrej Karpathy, OpenAI founding member and creator of the "vibe coding" concept, today endorsed the Claude Code team's approach of using HTML instead of Markdown for large language model outputs. Karpathy outlined an evolution roadmap for AI interaction interfaces: from plain text to Markd

GateNews · 1h ago

Austrac Warns of AI-Driven Money Laundering Risks as Australia Expands Anti-Money Laundering Rules July 1

On May 12, Australia's financial intelligence agency AUSTRAC warned that artificial intelligence is raising money laundering risks by enabling criminals to fabricate identities, forge documents, and hide proceeds faster and at greater scale. Starting July 1, 2026, real estate

GateNews · 1h ago

Google: Large language models are being used in real-world attacks; AI can bypass two-factor authentication security mechanisms

According to CoinEdition on May 12, Google’s Threat Intelligence Group released a report warning that attackers have used large language models in real-world cyberattacks affecting global systems, and confirmed that hackers have developed a Python-based zero-day exploit capable of bypassing two-factor authentication (2FA) security mechanisms. Google said the related activity is linked to state-sponsored cyberattacks and the abuse of AI tools within underground hacker networks. Specific Applica

MarketWhisper · 1h ago