Google thwarts hacker group’s AI-driven mass exploitation plan
The search giant says it intercepted a criminal hacking group attempting to use AI to plan a mass vulnerability exploitation event targeting two-factor authentication.
By Editorial Team · May 12, 2026

Google says it likely stopped a criminal hacking group from using artificial intelligence to orchestrate a mass exploitation attack, one that specifically targeted the bypass of two-factor authentication through a zero-day vulnerability. The intervention, disclosed by Google’s Threat Intelligence Group, offers a concrete look at how AI is reshaping the cat-and-mouse game between attackers and defenders in cybersecurity.
What happened and why it matters
Google’s Threat Intelligence Group identified a hacking operation that leveraged AI tools to research and plan the exploitation of a zero-day flaw. The attackers used AI to find a previously unknown software vulnerability and to automate the process of weaponizing it at scale, specifically to defeat 2FA protections. Google’s defenses caught and neutralized the attempt before it could be deployed broadly.
Google’s analysts linked the broader trend of AI-assisted hacking to state-sponsored actors, particularly groups associated with Iran, China, North Korea, and Russia. These advanced persistent threat (APT) groups have been increasingly integrating AI into their operations, using it for reconnaissance, vulnerability research, and automating tasks that previously required significant human effort.
Google’s analysts noted that APT and information operations actors are using AI to accelerate routine hacking tasks rather than inventing entirely new categories of attack. The threats themselves are familiar: phishing, malware deployment, credential theft, 2FA bypass. But the velocity and scale at which they can be executed is increasing dramatically.
The AI security arms race
Google’s AI safeguards have reportedly blocked malicious uses of its tools across multiple categories, including phishing campaigns and malware development. The company’s systems appear to be specifically tuned to distinguish AI queries aimed at vulnerability research and exploitation planning from legitimate security work.
Anthropic, the AI company behind Claude, reportedly delayed the launch of its Mythos model amid security concerns. Security researchers and AI companies are increasingly recognizing that no single organization can address these threats alone, with state-sponsored groups from four different countries independently leveraging AI for hacking operations.
What this means for crypto investors
North Korea’s Lazarus Group alone has been linked to some of the largest crypto heists in history, stealing billions of dollars worth of digital assets to fund the regime. Centralized exchanges, DeFi protocols, and wallet providers all rely on 2FA as a critical security layer, meaning a mass 2FA bypass could result in drained exchange accounts, compromised wallets, and potentially billions in stolen funds.
For individual investors, hardware security keys, which use origin-bound public-key authentication (the FIDO2/WebAuthn standard) rather than SMS or app-generated codes, offer stronger protection against the kind of bypass attacks described here. Moving high-value holdings to cold storage, where the keys aren’t connected to the internet, remains the gold standard.
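To see why app-based 2FA is the weaker link, it helps to look at how those six-digit codes are actually produced. The sketch below is a minimal implementation of the standard TOTP algorithm (RFC 6238) that authenticator apps use; it is illustrative only and not tied to any specific incident described above. The key point: the code depends only on a shared secret and the clock, with nothing binding it to the legitimate website, so a phishing page that relays the code in real time defeats it.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t: float = None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password, as generated by authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Nothing in this computation identifies the site being logged into, which is
# why a real-time phishing relay works against TOTP. A FIDO2 hardware key
# instead signs a challenge bound to the site's origin, so a response relayed
# through an attacker's domain fails verification.
```

A usage example with the RFC 6238 test secret: `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8)` reproduces the specification's published test vector.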
Disclosure: This article was edited by Editorial Team. For more information on how we create and review content, see our Editorial Policy.