Google researchers have found the first tangible evidence of hackers using artificial intelligence to discover and exploit previously unknown software vulnerabilities.
A criminal hacking group recently leveraged artificial intelligence to detect and weaponize a “zero-day” vulnerability, according to The New York Times. Google’s Threat Intelligence Group reported “high confidence” that AI supported the discovery of the bug, which targeted a popular administration tool. While the specific AI platform remains unidentified, Google confirmed it was not its own Gemini chatbot.
The finding turns long-held theoretical fears into reality. “It’s a taste of what’s to come,” said John Hultquist, chief analyst at Google’s Threat Intelligence Group. Experts noted that the code contained AI “fingerprints,” such as excessive explainer text. The discovery arrives as the Trump administration evaluates how to police advanced AI models such as Anthropic’s Claude. Rob Joyce, former NSA cybersecurity director, warned that “AI-authored code does not announce itself,” emphasizing the urgent need for new defensive frameworks.

