SoftEd Blog

AI Is Finding More Bugs Than Ever. The Patching Gap Is Still Where Breaches Happen.

Written by David Mantica | May 16, 2026

For years, the conventional wisdom about AI in cybersecurity ran something like this: large language models are good at writing phishing emails and bad at finding real bugs. That narrative is shifting. The May 2026 Patch Tuesday cycle produced one of the most striking data points yet in the AI-versus-vulnerability arms race — and it has direct implications for how organizations train their security and development teams.

According to security journalist Brian Krebs, Microsoft alone shipped fixes for at least 118 vulnerabilities this month, while Apple addressed 52, Google's Chrome team patched 127, and Mozilla's Firefox 150 release resolved a remarkable 271 issues. Oracle, meanwhile, fixed roughly 450 flaws in its most recent quarterly update and is shifting to a monthly cadence. A common thread runs through several of these releases: Project Glasswing, a much-hyped AI capability developed by Anthropic and made available to a few dozen tech giants, which Krebs reports "appears quite effective at unearthing security vulnerabilities in code." The Firefox 150 release in particular resolved vulnerabilities reportedly discovered during the Glasswing evaluation.

The asymmetry is shifting

For most of the modern security era, defenders have been outnumbered. Attackers needed to find one exploitable flaw; defenders needed to find them all. AI-assisted code auditing is changing that math. When a single evaluation helps Mozilla discover and patch 271 bugs in one release, the implication is clear: vendors with access to AI-driven vulnerability discovery are shipping dramatically more security fixes, and the patch cadence the rest of us live by is accelerating.

That acceleration is already visible. Firefox has moved to a more aggressive weekly cadence for security updates. Oracle is going monthly. Microsoft's May cycle is notable not only for its volume but for what's absent — Krebs notes it is the first Patch Tuesday in nearly two years in which Microsoft is not shipping fixes for emergency zero-day flaws that are already being exploited. For IT and security operations teams, the takeaway isn't "we can relax." It's the opposite: the operational tempo of patching, testing, and deploying is increasing, and the teams who can keep up will be the ones who've invested in modern vulnerability management practices and automation.

But AI cuts both ways

The same week Project Glasswing was making headlines for defensive wins, two other incidents underscored why this story isn't a clean victory lap. First, the education platform Canvas was hit with a data extortion attack from the cybercrime group ShinyHunters, who defaced the login page with a ransom demand threatening to leak data on 275 million students and faculty across nearly 9,000 educational institutions. The disruption forced parent company Instructure to take the platform offline during a period when many of the affected schools were in the middle of final exams.

Second, a KrebsOnSecurity investigation revealed that a Brazilian DDoS-protection firm, Huge Networks, had its infrastructure leveraged to orchestrate massive DDoS attacks against other Brazilian ISPs. The firm's CEO told Krebs the malicious activity resulted from a security breach, likely the work of a competitor. The botnet was built largely by scanning the internet for TP-Link Archer AX21 routers still vulnerable to CVE-2023-1389, a command injection flaw patched back in April 2023. Two and a half years after a patch was available, the unpatched devices were still numerous enough to fuel a sustained attack campaign.

The lesson is uncomfortable: AI is making it easier to find vulnerabilities, but the patching gap — the time between a fix being available and it actually being deployed — remains a major source of preventable breaches. Discovery is less and less the bottleneck. Operationalizing the response is.
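To make the patching gap concrete, here is a minimal Python sketch of how a team might measure it per asset: days between a fix becoming available and it actually being deployed, with never-patched devices accumulating exposure indefinitely. The hostnames, dates, and field names are hypothetical illustrations, not from any real inventory system; the TP-Link patch date roughly matches the April 2023 fix mentioned above.

```python
from datetime import date

# Hypothetical asset inventory: when a fix existed vs. when it was deployed.
fleet = [
    {"host": "edge-router-01", "cve": "CVE-2023-1389",
     "patch_available": date(2023, 4, 25), "deployed": date(2023, 5, 2)},
    {"host": "edge-router-02", "cve": "CVE-2023-1389",
     "patch_available": date(2023, 4, 25), "deployed": None},  # still unpatched
]

def patching_gap_days(asset, today=date(2026, 5, 16)):
    """Days of exposure after a fix existed; still growing if never deployed."""
    end = asset["deployed"] or today
    return (end - asset["patch_available"]).days

for asset in fleet:
    status = "patched" if asset["deployed"] else "STILL EXPOSED"
    print(asset["host"], patching_gap_days(asset), "days,", status)
```

Tracking this one number per asset, rather than just "patch released," is what surfaces a two-and-a-half-year exposure like the Archer AX21 botnet before an attacker does.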

What this means for L&D and workforce planning

For learning leaders and IT managers, the AI-accelerated security landscape reshapes priorities in three concrete ways.

Patch management is becoming a core competency, not a back-office task. When major vendors are collectively shipping hundreds of fixes per month, the skills around vulnerability triage, change management, and automated deployment become differentiators. Foundational certifications like CompTIA Security+ and role-based training in vulnerability management deserve a fresh look — not because they're new, but because the cadence they describe is now the daily reality.
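Vulnerability triage at this cadence usually comes down to a ranking policy. The sketch below is one plausible, hypothetical policy in Python (the CVE IDs, scores, and field names are invented for illustration): known in-the-wild exploitation outranks raw CVSS severity, and internet exposure breaks ties, so limited change windows go to the riskiest fixes first.

```python
# Hypothetical findings feed; fields and values are illustrative only.
findings = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "exploited_in_wild": False, "internet_facing": True},
    {"cve": "CVE-2026-0002", "cvss": 7.5, "exploited_in_wild": True,  "internet_facing": True},
    {"cve": "CVE-2026-0003", "cvss": 8.1, "exploited_in_wild": False, "internet_facing": False},
]

def triage_key(finding):
    # Active exploitation first, then exposure, then severity.
    return (finding["exploited_in_wild"], finding["internet_facing"], finding["cvss"])

queue = sorted(findings, key=triage_key, reverse=True)
for f in queue:
    print(f["cve"], f["cvss"], "exploited!" if f["exploited_in_wild"] else "")
```

Note that the exploited 7.5 jumps the queue ahead of the unexploited 9.8 — exactly the judgment call that triage training is meant to make routine.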

Secure coding training matters more, not less. It's tempting to assume that if AI can find bugs, developers don't need to. The opposite is true. AI tools are most effective when paired with developers who understand what the tool is flagging and why. Courses in secure software development, threat modeling, and DevSecOps practices help engineers interpret AI findings, prioritize fixes, and avoid the same classes of flaws in new code.

Cloud and identity skills are the connective tissue. Most modern patching, monitoring, and incident response runs through cloud-native tooling. Training programs covering AWS, Azure, and GCP security services — combined with identity-focused content aligned to NIST guidance — are how teams translate "a patch exists" into "the patch is deployed across our fleet by end of week."

The strategic read

The Project Glasswing story is, in one sense, a feel-good moment for the AI industry: a clear, measurable case of AI making software materially safer. But it's also a forcing function. Vendors with AI-discovered fixes will ship them faster. Attackers will reverse-engineer those patches faster. And the window between disclosure and active exploitation will continue to shrink.

Organizations that treat cybersecurity training as a one-time compliance checkbox will fall further behind. The ones that build continuous, role-aligned learning paths — covering everything from patch hygiene to secure development to cloud security architecture — will be the ones who can actually capitalize on AI's defensive promise. The tools are getting better. The question is whether the people running them are keeping pace.
