May 10 / Latest News

Silicon Saboteurs: Google Uncovers First AI-Generated Zero-Day Exploits

Google Threat Intelligence Group today released a landmark report confirming the first documented case of hackers using artificial intelligence to engineer zero-day exploits, marking a significant escalation in the digital arms race.

While threat actors have historically used AI for minor tasks such as drafting phishing emails, researchers discovered a Python-based script designed to bypass two-factor authentication that bore the unmistakable fingerprints of machine generation. Rather than the idiosyncratic logic of a human author, the code featured an abundance of tutorial-style docstrings and cited a hallucinated, non-existent CVSS security score, leading experts to conclude that the exploit was synthesized by a large language model rather than written by a human programmer.
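The stylistic tells the researchers describe can be illustrated with a deliberately harmless sketch. Everything here is hypothetical: the function, the over-explanatory docstring, and the CVE/CVSS identifier are invented purely to show the pattern and are not taken from the actual sample.

```python
# Illustrative only: a benign snippet exhibiting the "fingerprints" attributed
# to LLM-generated code -- tutorial-style docstrings aimed at a student rather
# than an operator, and a fabricated vulnerability reference. The function
# itself performs no attack; it simply compares two one-time codes.

import hmac


def verify_totp(submitted_code: str, expected_code: str) -> bool:
    """
    Verify a user-submitted one-time password.

    This function compares the submitted six-digit code against the expected
    code using a constant-time comparison, which prevents timing side-channel
    attacks. A timing attack occurs when an adversary measures how long a
    comparison takes in order to guess the secret one character at a time.
    (Human-written operational code rarely pauses to explain its own field
    like this; LLM output frequently does.)

    Related: CVE-2099-99999, CVSS 9.8
    # ^ hallucinated identifier: this CVE does not exist, the kind of
    #   confidently fabricated reference the report flags as a telltale sign.
    """
    return hmac.compare_digest(submitted_code, expected_code)


print(verify_totp("123456", "123456"))  # True
print(verify_totp("123456", "654321"))  # False
```

The code is functionally unremarkable; what matters for attribution, per the report's reasoning, is the mismatch between operational intent and an educational, reference-citing style.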

The report identifies state-sponsored groups from the People's Republic of China and North Korea as the primary drivers of this evolution, with units such as APT45 and UNC2814 training models on massive datasets of legacy security cases to automate the auditing of modern software. Beyond exploit development, these attackers are deploying autonomous agents such as Hexstrike and Strix to execute complex, multi-stage "agentic workflows" against high-value targets; a recent breach of a Japanese technology firm is attributed to these techniques. The systems use advanced memory frameworks to map corporate hierarchies and fingerprint hardware environments, enabling highly customized, automated infiltration.

The threat landscape has expanded further into the mobile and supply chain sectors with the emergence of the PROMPTSPY Android backdoor, which uses a specialized automation agent to monitor user screens and interact with device interfaces in real time. Meanwhile, a group known as TeamPCP has begun injecting AI-driven malicious code into critical software supply chain tools such as LiteLLM to harvest cloud credentials and GitHub tokens for extortion. Even as pro-Russian campaigns deploy AI voice cloning to impersonate journalists, Google has pivoted to a defensive stance, deploying its own AI tools, Big Sleep and CodeMender, to proactively identify and repair these machine-generated vulnerabilities.