Every risk management framework in cybersecurity relies on the same fundamental equation: Risk = Likelihood × Impact. But what happens when one half of that equation becomes meaningless overnight?
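In code form, that equation is trivially simple, which is exactly why the likelihood input matters so much. A minimal sketch (the 0–1 scales and the sample values are illustrative, not taken from any specific framework):

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Classic risk model: Risk = Likelihood x Impact.

    Both inputs on an illustrative 0-1 scale. The product IS the
    model -- if the likelihood input is wrong, the risk score is wrong.
    """
    return likelihood * impact

# A "complex" attack scored as unlikely looks safe on paper...
paper_risk = risk_score(likelihood=0.1, impact=0.9)
# ...until tooling collapses the likelihood assumption.
actual_risk = risk_score(likelihood=0.7, impact=0.9)
print(f"scored: {paper_risk:.2f}, actual: {actual_risk:.2f}")
```

The impact side of the equation is unchanged; everything that follows is about the likelihood side.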
If your organization uses any of the major cybersecurity risk assessment frameworks (NIST SP 800-30, CVSS, the OWASP Risk Rating Methodology, or ISO/IEC 27005), you could have a critical problem that needs to be addressed immediately. These frameworks all factor attack complexity and attacker sophistication into their likelihood calculations. The assumption that complexity keeps likelihood low just became dangerously obsolete for the vast majority of scenarios.
The Complexity Trap
Let me be specific about what I mean. The CVSS v3.1 specification explicitly defines "Attack Complexity" as a core exploitability metric, measuring "the conditions beyond the attacker's control that must exist in order to exploit the vulnerability." It assigns vulnerabilities either "Low" complexity (no specialized access conditions) or "High" complexity (requiring "measurable amount of effort in preparation or execution against the vulnerable component").
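To see how much weight that single metric carries, here is a sketch of the CVSS v3.1 exploitability sub-score using the coefficient values from the v3.1 specification. The scenario (network attack vector, no privileges required, no user interaction) is assumed purely for illustration:

```python
# CVSS v3.1 exploitability sub-score:
#   Exploitability = 8.22 x AttackVector x AttackComplexity
#                         x PrivilegesRequired x UserInteraction
# Coefficients are the published v3.1 values for the assumed scenario.
AV_NETWORK = 0.85
PR_NONE = 0.85
UI_NONE = 0.85
AC = {"low": 0.77, "high": 0.44}

def exploitability(attack_complexity: str) -> float:
    """Exploitability sub-score, varying only Attack Complexity."""
    return 8.22 * AV_NETWORK * AC[attack_complexity] * PR_NONE * UI_NONE

print(f"AC:Low  -> {exploitability('low'):.2f}")   # ~3.89
print(f"AC:High -> {exploitability('high'):.2f}")  # ~2.22
# Flipping this one metric moves the sub-score by about 43% --
# precisely the assumption AI-assisted tooling now undermines.
```

A single metric value nearly halves the exploitability contribution to the base score, which is why a stale "High" rating distorts the final number so badly.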
The OWASP Risk Rating Methodology goes even further, breaking down likelihood assessment by "Skill Level" required, explicitly scoring threats from "No technical skills (1)" up to "Security penetration skills (9)." Their "Ease of Exploit" factor ranges from "Theoretical (1)" to "Automated tools available (9)."
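In the OWASP methodology, overall likelihood is the average of eight 0–9 factors. The sketch below shows how re-scoring just the AI-affected factors can move an assessment across a severity band. All the factor values here are an illustrative assessment of a hypothetical system, not real data:

```python
# OWASP Risk Rating: likelihood = average of eight 0-9 factors
# (four threat-agent factors + four vulnerability factors).
def likelihood(factors: dict) -> float:
    return sum(factors.values()) / len(factors)

def level(score: float) -> str:
    """OWASP severity bands: <3 LOW, 3 to <6 MEDIUM, 6-9 HIGH."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

pre_ai = {
    "skill_level": 3,        # realistic agents lack pentest skills
    "motive": 6, "opportunity": 7, "size": 4,
    "ease_of_discovery": 3,
    "ease_of_exploit": 1,    # "theoretical" -- no tooling exists
    "awareness": 6, "intrusion_detection": 3,
}
# AI-assisted tooling effectively upgrades any agent's skill and
# turns "theoretical" exploits into buildable afternoon projects.
post_ai = dict(pre_ai, skill_level=9, ease_of_exploit=9,
               ease_of_discovery=7)

print(level(likelihood(pre_ai)))   # MEDIUM
print(level(likelihood(post_ai)))  # HIGH
```

Nothing about the system changed between the two assessments; only the assumption about who can exploit it did.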
NIST SP 800-30 Rev. 1 defines likelihood assessment based on adversary capabilities, stating organizations should evaluate "the likelihood that threat events, once initiated or occurring, will result in adverse impacts" by considering the complexity and sophistication required for successful exploitation.
For decades, this made perfect sense. Complex custom protocols, proprietary systems, and sophisticated multi-step attacks required significant expertise, time, and resources. Risk teams rightfully downgraded threats that were "too complex to exploit practically."
But AI has systematically eliminated the vast majority of these barriers. Let me dig into the crux of the issue:
The New Reality: Most Complexity Barriers Have Collapsed
As someone who has spent months mastering AI development tools like Windsurf, Copilot, and Claude Code, I can tell you firsthand that the game has fundamentally changed. These aren't just productivity enhancers for legitimate developers; they're force multipliers that eliminate the expertise barriers that have protected complex systems for decades.
This doesn't mean every complexity barrier has vanished overnight. Air-gapped systems, timing-dependent attacks requiring precise environmental conditions, and certain theoretical cryptographic exploits still present meaningful barriers. But the vast majority of "complex" custom implementations, proprietary protocols, and multi-step attacks that traditionally required specialized expertise? Those barriers have been democratized.
This isn't theoretical. During a recent penetration test, I encountered a client's custom WebSocket protocol with proprietary encoding—exactly the type of "complex" system that risk frameworks classify as low probability for exploitation. Using AI-assisted development tools, I built a decoding tool for the obfuscated administrative traffic in under five hours. This gave me visibility into the application's behavior and allowed me to test for injection flaws and other vulnerabilities that would have been invisible without that understanding. As someone who, on earlier engagements, would likely have skipped this analysis due to time constraints, I can say with confidence: what traditionally would have required days of specialized reverse engineering became an afternoon project.
Any CISO who thinks their adversaries aren't leveraging these same tools is living in the past. The technical barriers that once required nation-state resources or elite criminal expertise have been democratized. AI doesn't care about your custom protocol's complexity—it excels at pattern recognition and code generation regardless of how obscure your implementation might be.
The Audit You Need to Conduct Today
Here's your immediate action plan: Go through your risk register and identify every entry where likelihood was reduced due to technical complexity or sophistication requirements. The question isn't whether AI has eliminated ALL barriers, but whether it has lowered them enough to materially change your risk calculations.
Look specifically for risks reduced due to:
- CVSS "High" Attack Complexity scores: Systems rated lower risk because they required "specialized access conditions or extenuating circumstances"
- OWASP skill-level assumptions: Threats downgraded because they required "advanced computer user (5)" or "network and programming skills (6)" rather than basic technical skills
- NIST adversary capability gaps: Risk assessments that assumed attackers lacked the sophistication for complex, multi-step attacks
- ISO 27005 exploitability barriers: Likelihood reductions based on the assumption that custom implementations or proprietary systems provided meaningful protection
For each identified risk, ask: "Would AI-assisted tools significantly lower the skill barrier or preparation time required for this attack?" If the answer is yes, your likelihood assessment needs updating.
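The audit pass above can be sketched in a few lines. The register format and flag names here are hypothetical; adapt them to whatever your GRC tool exports:

```python
# Hypothetical audit pass over a risk register export. Flag names
# are illustrative stand-ins for complexity-based likelihood
# reductions, not a standard taxonomy.
COMPLEXITY_FLAGS = {
    "cvss_high_attack_complexity",
    "requires_specialist_skills",
    "custom_protocol_obscurity",
    "multi_step_attack_chain",
}

def needs_reassessment(entry: dict) -> bool:
    """True if likelihood was reduced on complexity grounds."""
    return bool(COMPLEXITY_FLAGS & set(entry.get("likelihood_reductions", [])))

register = [
    {"id": "R-101", "likelihood_reductions": ["cvss_high_attack_complexity"]},
    {"id": "R-102", "likelihood_reductions": ["compensating_controls"]},
    {"id": "R-103", "likelihood_reductions": ["custom_protocol_obscurity",
                                              "requires_specialist_skills"]},
]

for entry in register:
    if needs_reassessment(entry):
        print(entry["id"], "-> re-evaluate under AI-assisted tooling")
```

Entries reduced for non-complexity reasons (compensating controls, limited blast radius) pass through untouched; the audit targets only the sophistication assumption.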
Beyond Individual Risk Items
This isn't just about updating risk scores. The frameworks themselves need fundamental revision:
For CVSS users: The "Attack Complexity" metric can no longer automatically distinguish between low- and high-complexity attacks when AI tools eliminate most preparation barriers. Every "High" complexity rating deserves scrutiny—many should become "Low."
For OWASP Risk Rating implementations: Your skill-level assessments need systematic review. The gap between "some technical skills (3)" and "security penetration skills (9)" has collapsed for most scenarios involving code analysis, protocol reverse engineering, and custom exploit development.
For NIST SP 800-30 adopters: Your likelihood calculations based on adversary sophistication requirements are outdated. The document's assumption that complex attacks require correspondingly sophisticated adversaries no longer holds for the majority of technical vulnerabilities.
For ISO/IEC 27005 frameworks: Your vulnerability exploitation assessments that factor in technical complexity as a likelihood reducer need immediate review. Custom implementations no longer provide meaningful security through obscurity.
The Strategic Imperative
This isn't a minor calibration issue. It's a fundamental shift requiring immediate C-level attention. Organizations that continue using outdated complexity-based risk reduction are essentially flying blind. Your most "secure" systems may actually be your most vulnerable, precisely because they haven't been tested against AI-powered attack capabilities.
The question isn't whether this affects your organization. The question is how quickly you'll recognize that while some complexity barriers remain meaningful, the vast majority have been democratized by AI tools. Every published vulnerability involving custom code, proprietary protocols, or complex multi-step attacks needs re-evaluation.
Your next board meeting should include one agenda item: a systematic audit of accepted risks based on attack complexity assumptions. Because in an AI-powered threat landscape, most complexity isn't protection—it's just delayed inevitability.
Have you started re-evaluating your complexity-based risk assumptions? What percentage of your risk register was based on sophistication requirements that AI tools now democratize? Share your experience in the comments.
References:
- NIST SP 800-30 Rev. 1: Guide for Conducting Risk Assessments
- CVSS v3.1 Specification Document - Common Vulnerability Scoring System
- OWASP Risk Rating Methodology - Factor Analysis Framework
- ISO/IEC 27005:2022 - Information Security Risk Management
- The Real AI Revolution in Penetration Testing: Custom Tooling at Lightning Speed - Secure Ideas