Anthropic’s recent announcement about the unprecedented cyberattack capabilities of its new Claude Mythos Preview model should serve as a much-needed wake-up call for the industry.
For years, cybersecurity operated on the assumption that if a vulnerability existed, there was a decent chance no one had found it yet. That assumption no longer holds. Recent advances in AI-driven security have made clear that while nothing fundamentally new has been introduced, the timeline has changed. The distance between “a vulnerability exists” and “it’s being actively exploited” is collapsing faster than most organizations are prepared to handle.
This didn’t happen overnight. AI has been steadily accelerating vulnerability discovery for years, scanning complex codebases in minutes, surfacing unknown flaws at scale, and mapping attack paths with increasing sophistication. But what we’re seeing now is something more consequential: AI is beginning to connect the dots. Vulnerabilities that once sat isolated (low severity, manageable, often deprioritized) can now be stitched together into viable exploit paths with minimal human involvement. What used to require deep expertise and time is becoming increasingly automated. That barrier is eroding, and with it, one of the quiet protections defenders have long relied on.
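To make the chaining idea concrete, here is a minimal sketch in Python. It models each low-severity finding as an edge in a hypothetical attack graph and searches for a path from an external foothold to a sensitive asset; the finding names and network states are illustrative assumptions, not real CVEs or a real product's logic.

```python
from collections import deque

# Hypothetical attack graph: each low-severity finding is an edge that
# moves an attacker from one privilege state to another. All names here
# are illustrative assumptions.
edges = {
    "internet": [("exposed-debug-endpoint", "web-dmz")],
    "web-dmz": [("leaked-internal-hostname", "app-tier")],
    "app-tier": [("default-service-credential", "database")],
}

def find_chain(start, goal):
    """Breadth-first search for a chain of findings linking start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for finding, nxt in edges.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [finding]))
    return None

chain = find_chain("internet", "database")
print(chain)
# → ['exposed-debug-endpoint', 'leaked-internal-hostname', 'default-service-credential']
```

Each finding on its own would likely be triaged as low priority; the search shows how three of them compose into one complete path to the database, which is the pattern the article describes AI automating at scale.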
Security programs have always depended on time: time to discover an issue, time to understand it, time to fix it. AI is compressing the first part of that equation dramatically, while everything else—testing cycles, change management, operational realities—remains largely unchanged. The result is a growing imbalance, as vulnerabilities are being discovered faster than they can be remediated. That expanding gap is where risk now lives.
As discovery becomes faster and more accessible, more vulnerabilities are known, more actors are capable of finding them, and the overlap between discovery and exploitation continues to grow. It’s not that defenders are suddenly less capable, but that the environment they’re operating in is moving at a different speed.
The underlying asymmetry remains the same. Attackers still only need one viable path. Defenders are still responsible for all of them. AI is simply making that imbalance more pronounced.
The organizations adapting to this moment are adjusting their assumptions and operating as if vulnerabilities will be discovered quickly, because they will be. They’re investing more heavily in containment strategies, knowing prevention alone can’t keep pace. They’re getting more disciplined about prioritization, accepting that not everything can be fixed immediately, and focusing instead on reducing real exposure. And importantly, they’re preparing for a near future where these capabilities are widely accessible, not tightly controlled.
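One way to picture the prioritization discipline described above is a ranking that weights real exposure over raw severity. The sketch below is a simplified Python illustration under assumed fields and weights (the findings, the `internet_facing`/`exploit_public` flags, and the multipliers are all hypothetical), not any standard scoring formula.

```python
# Hypothetical findings; field names, CVSS values, and weights are
# illustrative assumptions for the sketch.
findings = [
    {"id": "VULN-1", "cvss": 9.1, "internet_facing": False, "exploit_public": False},
    {"id": "VULN-2", "cvss": 6.5, "internet_facing": True,  "exploit_public": True},
    {"id": "VULN-3", "cvss": 7.8, "internet_facing": True,  "exploit_public": False},
]

def exposure_score(f):
    # Weight reachable, actively exploitable issues above high-severity
    # ones that sit behind segmentation.
    score = f["cvss"]
    score *= 2.0 if f["internet_facing"] else 0.5
    score *= 1.5 if f["exploit_public"] else 1.0
    return score

ranked = sorted(findings, key=exposure_score, reverse=True)
print([f["id"] for f in ranked])
# → ['VULN-2', 'VULN-3', 'VULN-1']
```

Note that the highest-CVSS finding ranks last: it is unreachable from the internet, so fixing the medium-severity but exposed and actively exploited issue first reduces more real risk.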
At the same time, there’s a coordinated industry-wide effort to get ahead of this shift. Initiatives like Project Glasswing, which brings together major players such as Amazon, Google, Microsoft, Apple, Cisco, and CrowdStrike, are putting advanced AI tools directly into the hands of those responsible for securing critical infrastructure. The goal is to use the same capabilities driving this acceleration to find and fix vulnerabilities before they can be exploited at scale.
These steps reflect a clear recognition that traditional approaches won’t scale to meet what’s coming. They don’t change the trajectory, however. The forces pushing this forward (open research, competitive pressure, nation-state investment) aren’t slowing down. Over time, these capabilities will spread, access will broaden, and the compression of the vulnerability lifecycle will continue.
The organizations that navigate this well won’t be the ones trying to keep up with discovery alone. They’ll be the ones that have built for resilience from the start, designed to contain issues, prioritize intelligently, and move with speed when exposure inevitably occurs. The ones that don’t may soon find that the gap between “secure” and “exposed” has quietly disappeared.