Overview
Daniel Miessler predicts that 2026 will be a pivotal year where cybersecurity becomes an AI arms race between attackers and defenders. Companies will increasingly rely on AI agents to handle security tasks due to the difficulty of scaling human security teams against constant, AI-powered attacks.
Key Arguments
- **Security will become an AI-vs.-AI competition, where the primary question for companies is how good their attackers' AI is versus their own defensive AI.** CISOs are realizing there is no way to scale human teams against constant, continuous, and increasingly effective AI-powered attacks. Asset management, attack surface management, and vulnerability management must keep pace with fully automated attacks.
- **Organizations will shift from hiring humans to deploying AI agents for security work, in part to avoid recruitment friction.** Finding, vetting, interviewing, and onboarding good security people is extremely difficult and time-consuming. AI agents do not yet match experienced security professionals in quality, but they will be adopted as a way to sidestep these hiring challenges once they become "good enough," which Miessler expects between mid-2026 and 2027.
- **Secure-coding training will finally become effective because it will be designed for AI systems rather than humans.** Traditional security training fails because humans are primarily driven by promotion and pay incentives that prioritize shipping features over security. AI does not share this limitation: it can maintain multiple priorities simultaneously and be instructed never to deprioritize security concerns.
- **Asset management will become feasible for the first time, via AI agents.** Asset management has been effectively unsolvable for human teams because there is too much to monitor and it changes too frequently. AI agents are becoming competent enough to handle this continuous monitoring, and the bar for improvement is low because current asset management is so poor.
Implications
This represents a fundamental shift in cybersecurity strategy: organizations must invest in AI-powered defense or risk being overwhelmed by AI-enhanced attacks. Companies need to start preparing now for an environment where traditional human-centered security approaches are insufficient; those who fail to adopt agentic security platforms may find themselves defenseless against automated, continuous threats.
Counterpoints
- AI agents may not be ready for critical security decisions: The author acknowledges that agents will not match the quality of experienced security professionals in 2026-2027, which raises the question of whether organizations should rely on them for critical security functions.
- Over-reliance on automation could create new vulnerabilities: While not explicitly addressed in the piece, shifting from human judgment to AI systems could introduce new attack vectors (such as prompt injection against defensive agents) and reduce human oversight of security decision-making.