
According to CoinEdition on May 12, Google’s Threat Intelligence Group released a report warning that attackers have used large language models in real-world cyberattacks against systems worldwide. The report confirms that hackers deployed a Python-based zero-day exploit capable of bypassing two-factor authentication (2FA), and Google linked the activity to state-sponsored attacks and the abuse of AI tools within underground hacking networks.
According to the Google Threat Intelligence Group report, AI tools are helping attackers at nearly every stage of an operation: discovering software vulnerabilities faster, automating parts of the attack workflow, and refining techniques such as phishing and malware creation.
The report documented a specific case in which hackers exploited a Python-based zero-day vulnerability to bypass two-factor authentication (2FA). Google explained that the attack still requires valid login credentials, indicating that the flaw stems from the system’s design rather than from a software defect.
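The report does not disclose technical details of the exploit, but the distinction Google draws, a design flaw rather than a code bug, can be illustrated with a hypothetical login flow. In the sketch below (all names and logic are illustrative assumptions, not taken from the report), the flawed design issues a full session token after the password check alone, so an attacker who already holds valid credentials never has to complete the 2FA step; the safer design withholds the session until the one-time code is verified server-side.

```python
# Hypothetical illustration of a 2FA *design* flaw (not the exploit from
# the report): a session becomes usable before the second factor is checked.

SESSIONS = {}

def check_password(username: str, password: str) -> bool:
    # Stand-in credential store for the illustration.
    return (username, password) == ("alice", "hunter2")

def check_otp(username: str, otp: str) -> bool:
    # Stand-in one-time-code check; real systems use TOTP/WebAuthn.
    return otp == "123456"

def login_flawed(username: str, password: str) -> str:
    """Flawed design: a full session token is issued before 2FA completes.

    An attacker with stolen-but-valid credentials can use this token
    directly and simply never visit the 2FA step.
    """
    if not check_password(username, password):
        raise PermissionError("bad credentials")
    token = f"session-{username}"
    SESSIONS[token] = {"user": username, "2fa_done": False}
    return token

def login_fixed(username: str, password: str, otp: str) -> str:
    """Safer design: no session exists until the OTP is verified server-side."""
    if not check_password(username, password) or not check_otp(username, otp):
        raise PermissionError("bad credentials or OTP")
    token = f"session-{username}"
    SESSIONS[token] = {"user": username, "2fa_done": True}
    return token
```

In the flawed variant the 2FA requirement exists only as an unenforced follow-up step, which is why, as Google notes, such an attack needs valid credentials yet still defeats the mechanism.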
The report also noted that attackers trained their methods on a database of thousands of known vulnerabilities and example exploits, looking for patterns that help uncover new weaknesses. Hackers additionally used AI to generate decoy code, tamper with payloads, and write dynamic scripts that evade detection; in some cases, the AI system could issue commands to infected devices in real time.
According to the Google Threat Intelligence Group report, groups linked to China and North Korea are the leading adopters of these AI-assisted attack methods. Specific tactics include carefully crafted prompts designed to extract security information from systems, and posing as cybersecurity experts to scan firmware and embedded devices for potential vulnerabilities.
According to CoinEdition, Google said it is strengthening its AI-based defenses: the Big Sleep system automatically identifies vulnerabilities, the CodeMender system automatically patches them, and Gemini includes built-in safeguards that block suspicious activity in users’ accounts.