Google’s AI Agent ‘Big Sleep’ Stops Live Cyber Threat, Marks First AI-Driven Exploit Prevention
Developed by DeepMind and Project Zero, the AI tool identified a critical vulnerability in real time, signaling a shift in how cyber threats are intercepted and neutralized.

MOUNTAIN VIEW, Calif., July 16, 2025 – Google on Tuesday said it had nipped a potential hacking attempt in the bud, stating that a large language model (LLM) it developed to find vulnerabilities recently discovered a bug that hackers were preparing to exploit.
Over the past year, ‘Big Sleep’ has emerged as a key asset for Google’s security teams.
Google CEO Sundar Pichai announced that the company’s AI agent, ‘Big Sleep,’ successfully identified and thwarted a cyber exploit before it could be deployed.
Notably, this marks a first-of-its-kind achievement for Artificial Intelligence (AI) in threat prevention.
“New from our security teams: Our AI agent Big Sleep helped us detect and foil an imminent exploit. We believe this is a first for an AI agent, definitely not the last, giving cybersecurity defenders new tools to stop threats before they’re widespread,” Pichai posted on X.
Big Sleep is a tool developed by Google DeepMind and Project Zero to detect hidden security flaws.
The project, which evolved out of Google’s earlier LLM-assisted vulnerability research, uses a large language model to analyze code for exploitable bugs. In November last year, ‘Big Sleep’ found its first real-world bug, and it has continued to surface new ones since.
How Big Sleep Detected the Bug
Google is tight-lipped about who the threat actors were and which indicators were discovered. However, the company’s growing reliance on ‘Big Sleep’ has significantly increased pressure on hackers, according to cybersecurity experts.
The LLM-assisted vulnerability discovery framework uncovered a security flaw in SQLite, an open-source database engine embedded in browsers, mobile operating systems, and countless applications, before it could be exploited in the wild.
The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), was a high-severity flaw that Google said was “only known to threat actors and was at risk of being exploited.”
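For developers who embed SQLite, one practical follow-up (illustrative only, and not part of Google’s disclosure) is to confirm which library version an application actually links against and compare it with the patched release listed in the official CVE-2025-6965 advisory. The minimal Python sketch below uses the standard-library sqlite3 module for that check.

```python
import sqlite3

# Report the SQLite library version linked into this Python build so it can be
# compared against the patched release listed in the CVE-2025-6965 advisory.
# (Illustrative check only; not part of Google's disclosure.)
print("SQLite library version:", sqlite3.sqlite_version)
```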
Google has been leveraging the tool to actively search for and identify unknown security vulnerabilities in software. The rise of ‘Big Sleep’ marks a new chapter in global cybersecurity.
The company claims it was “able to actually predict that a vulnerability was imminently going to be used” and successfully neutralized it in advance.
“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild,” the company stated.
Identity of Threat Actors
As seen in previous cybersecurity breaches, hacking groups continue to evolve their tactics.
However, the rapid advancement of AI agents, and their increasing accessibility, provides new advantages for cybersecurity defenders.
A Google spokesperson said that the company’s threat intelligence group was “able to identify artifacts indicating the threat actors were staging a zero-day but could not immediately identify the vulnerability.”
“The limited indicators were passed along to other Google team members at the zero-day initiative, who leveraged Big Sleep to isolate the vulnerability the adversary was preparing to exploit in their operations,” the spokesperson added.
In a blog post highlighting a range of AI developments, Google noted that since Big Sleep’s debut in November, it has discovered multiple real-world vulnerabilities, “exceeding” the company’s expectations.
Google said it is now using Big Sleep to help secure open-source projects and described AI agents as a “game changer” because they “can free up security teams to focus on high-complexity threats, dramatically scaling their impact and reach.”
By uncovering the vulnerability before the exploit could launch, Google’s AI has moved from passive detection to active defense, setting a new benchmark in cybersecurity.
Inputs from Saqib Malik
Editing by David Ryder