Overconfident and Underprepared: IT Leaders Misjudge AI Cyber Risk
AI-generated malware is exploding in volume and sophistication. Legacy cyber tools, built on signatures, heuristics, and aging machine learning, are failing spectacularly in this new era of Dark AI. Yet confidence in these tools remains remarkably high, creating a widening disconnect between perception and reality.
In this blog, we dig into the results of our new study of 500 U.S. IT professionals, which shows that IT professionals, especially those in management positions, don’t realize just how quickly the new AI-driven threat landscape is shifting beneath their feet.
AI Threats Are Growing Faster Than Defenders Think
Our study found that 64% of IT professionals believe fewer than one million pieces of AI-generated malware are created each day. Public statistics reveal that more than half a million new malware variants are detected daily. The key word: detected. That number reflects only what legacy tools manage to catch. In reality, the daily volume is likely 10x higher, or more. And the more we examined the data, the more concerning the picture became.
When Deep Instinct threat analyst Brian Black put legacy tools to the test using AI-generated malware, 65 of the 73 tools failed to detect it – a staggering 89% miss rate. If similar patterns hold in production environments, the true daily volume of unseen malware could easily exceed 5M variants.
Rather than treat that miss rate as a hard indicator of global malware volume, we frame it as a thought experiment – a way to illustrate just how dramatically detection-based systems can undercount what’s really happening in the wild. And it raises uncomfortable “what ifs.” What if legacy tools are only catching the easiest-to-spot threats (i.e., those with known signatures)? What if the truly damaging malware is the most evasive? Ultimately, the volume matters because undetected threats compound: as the pool of unseen malware grows, so does the likelihood that a single breach will have catastrophic consequences.
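To make the thought experiment concrete, here is a minimal back-of-envelope sketch. It assumes, purely for illustration, that the catch rate from Brian Black’s 73-tool test (8 of 73 tools detected the sample) can stand in for the fraction of in-the-wild variants that detection-based tools surface, and that public feeds report roughly 500,000 detected variants per day:

```python
# Thought-experiment extrapolation, not a measurement.
# Assumption: the per-tool catch rate from the 73-tool test is used
# as a rough proxy for the share of malware that gets detected at all.
tools_tested = 73
tools_that_missed = 65
catch_rate = (tools_tested - tools_that_missed) / tools_tested  # ~11%

reported_daily = 500_000  # publicly reported detected variants per day

# If only ~11% of variants are ever seen, the implied true volume is:
implied_true_daily = reported_daily / catch_rate

print(f"Catch rate: {catch_rate:.1%}")                           # ~11.0%
print(f"Implied true daily volume: ~{implied_true_daily:,.0f}")  # ~4,562,500
```

Under these assumptions the implied daily volume lands in the multi-million range – roughly 9x the publicly detected figure – which is the intuition behind the “10x higher, or more” framing above.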
As attackers continue automating malware generation with AI, the true scale of the threat is almost certainly orders of magnitude larger than public detection numbers suggest. What we can confidently say is this: threats are accelerating, legacy tools are buckling, and defenders do not realize how far behind they are.
Legacy Systems Are Failing in the Face of AI
This growing threat is concerning on its own, but our study revealed an even deeper issue: confidence in outdated defenses remains alarmingly high, even as their effectiveness declines.
Real-world data reinforces this disconnect. According to the Identity Theft Resource Center’s H1 2025 Data Breach Report, there were 1,732 publicly reported data compromises in the first half of the year, continuing an upward trend that shows how frequently attackers are bypassing legacy security controls. Breaches are rising while confidence remains high, underscoring a widening gap between perception and reality.
Here is what we found:
- Overall confidence remains high: 86% of IT professionals believe their existing tools can stop AI-generated malware pre-execution.
- A generational confidence curve reinforces the trend: 82% of Baby Boomers, 84% of Gen X, 87% of Millennials, and 90% of Gen Z believe their technologies can stop AI-generated threats. The differences are subtle, but the trend is clear: confidence steadily rises in younger cohorts – yet even the least confident generation reports optimism that far outpaces the real-world performance of legacy defenses. The risk is not that one generation is overconfident; it is that every generation is.
- Seniority strongly influences confidence: Directors and above are nearly twice as likely as frontline staff to be very confident their organization can stop malware attacks (42% vs. 24%).
This overconfidence is dangerous. Confidence in legacy cyber tools leads organizations to complacency as they continue to fund defenses that quietly fail, giving attackers the advantage and leaving security teams to fight blind.
FinServ: The Historical Leader Now Falling Behind
For decades, the financial services industry has been at the bleeding edge of cybersecurity – navigating a business environment where risk evaluation is uncompromising, controls are mature, and security spend is nonnegotiable.
But our new data tells a different story:
- Only 26% of IT professionals in the finance industry feel very confident in stopping AI-generated attacks pre-execution – the lowest among major industries.
- Meanwhile, tech and software show the highest confidence (46% very confident), despite operating in some of the most rapidly changing and highly targeted environments.
Finance’s pullback should be a clear warning to all other industries. The sector’s declining confidence is not theoretical. Recent attacks, such as the breach at financial services vendor SitusAMC, which exposed sensitive data tied to major banks, show how even well-defended institutions remain vulnerable when third-party systems rely on outdated models.
If the most sophisticated and well-resourced sector – the one built on risk modeling and paranoia – is losing confidence, it signals a broader unraveling underway, and the outlook for other industries is alarming.
Healthcare, for instance, is already struggling to keep pace. Its attack surface is large, porous, aging, and expanding faster than security leaders can reinforce it. Healthcare environments remain deeply dependent on legacy infrastructure, fragmented systems, and outdated tech and vendor stacks. Within the last year, more than 500 healthcare breaches were reported. Budgets are strained, and modernization is routinely slowed by regulatory complexity and technical misalignment.
If finance is signaling concern, healthcare is already on the front lines. And as both sectors face growing pressure, organizations across every industry are accelerating their AI adoption to keep up, often moving faster than their strategies can support.
The Paradox: Too Slow. Too Fast. Never Strategic Enough.
Organizations across industries are racing to adopt AI technologies to stay competitive, innovate quickly, and keep pace with a rapidly shifting threat landscape. According to our survey, 45% consider themselves early adopters or fast followers (20% early adopters; 25% fast followers).
But being the first or fastest to adopt new technology is not a strategy. Speed without direction is how organizations drive straight into the risks they’re trying to outrun. In the rush to modernize, many organizations default to large, familiar security vendors. Not because these vendors offer the strongest protection, but because they appear to be the safest choice. This creates the most dangerous inversion of all.
The faster organizations move, the more likely they are to adopt tools that still rely on outdated detection models, leaving them more vulnerable even as they lower their guard.
Closing the Gap: Why Deep Learning is the Only Path Forward
Traditional security tools built on signatures, heuristics, and legacy machine learning weren’t designed for today’s threats. Malware mutates in seconds, AI-generated variants bypass static logic instantly, and attackers automate creativity at a scale defenders cannot match. Detection-based systems simply cannot keep up.
Deep learning changes that equation by enabling true preemptive data security. It operates at machine speed, identifies never-before-seen threats before execution, and delivers consistent outcomes that give leaders and practitioners a clear view of their real readiness. In an environment defined by rapid mutation and automation, preemptive protection is no longer optional. It is the only viable path forward.
Our research illustrates just how urgent this shift has become. In another recent blog, we detailed Nimbus Manticore, a sophisticated, AI-engineered malware strain that bypassed every legacy engine on VirusTotal, except for Deep Instinct, for a full week. Discoveries like this highlight a growing and dangerous reality: attackers are advancing faster than detection-based defenses can respond, leaving organizations exposed to threats they never even see.
In 2026 and beyond, the organizations that endure will be those that proactively confront the gap between perception and reality and move beyond legacy detection models that attackers already know how to evade. Achieving real preparedness will require leaders to retire outdated assumptions and adopt deep learning-native approaches built for the speed, scale, and sophistication of modern threats.
Finance’s caution is a warning. Healthcare’s exposure is a preview. And the broader industry’s naive optimism is a dangerous miscalculation. As millions of AI-generated malware variants rewrite the rules of cyber defense each day, survival will depend on a willingness to rebuild security strategies from the foundation up through a preemptive lens.

