We can now add cybercrime to the list of growing concerns associated with artificial intelligence. Google’s Threat Intelligence Group (GTIG) said it discovered, for the first time ever, a threat actor using a zero-day exploit that it believes was developed by AI. Zero-day vulnerabilities are often the most dangerous since they’re unknown to the targets, leaving them with zero days to prepare for the attack.
Google said in the report the threat actor was planning to use it in a “mass exploitation event,” but its proactive discovery “may have prevented its use.” Google added that it doesn’t believe its own Gemini models were used, but still has “high confidence” an AI model was part of discovering the vulnerability and weaponizing an exploit.
The GTIG report didn’t identify the target, but said Google notified the unnamed company, which then patched the issue. Google didn’t reveal the bad actors either, though it noted that groups associated with China and North Korea have shown “significant interest” in using AI to exploit security vulnerabilities.
Given how quickly AI models have evolved for everyday use, it’s not surprising that they’re also being put to malicious ends. In an interview with The New York Times, John Hultquist, the chief analyst at GTIG, characterized it as “a taste of what’s to come” and “the tip of the iceberg,” adding that this case was just the first “tangible evidence” of these sorts of attacks. Google said in its report that threat actors have been using AI at different stages of a cyberattack, but that “AI can also be a powerful tool for defenders.” Like Google, other companies are using AI models to power preventative measures. Last month, Anthropic announced Project Glasswing, an initiative tasked with using Claude Mythos Preview to find and defend against “high-severity vulnerabilities.”
