Avoid the engineering-only CISO trap. Learn how AI in cybersecurity supports holistic risk leadership across controls, pipelines, and resilience.

AI-Ready CISO: Avoid the Engineering-Only Trap
A single control can be “perfect” and still get you breached.
That’s the uncomfortable lesson behind several headline-grabbing incidents in 2025—especially in crypto and AI-adjacent companies where the stack changes weekly, third-party dependencies are everywhere, and attackers don’t waste time on your strongest defenses. They go around them.
David Schwed’s “two CISOs” framing nails a pattern I keep seeing: organizations hire a deeply technical, engineering-first security leader expecting rigor, automation, and clean architecture… and end up with a security program that looks great on a diagram but breaks under real pressure. The fix isn’t “hire a less technical CISO.” The fix is hiring (and enabling) a holistic security leader—and using AI in cybersecurity to keep the program balanced across people, process, and technology.
Engineering-focused vs. holistic CISO: the real difference
An engineering-focused CISO optimizes for preventative controls and “secure-by-design” architecture. A holistic CISO optimizes for risk outcomes and resilience, assuming failure will happen and preparing the organization to contain it.
Both CISOs can be smart. Both can ship improvements quickly. The difference is what they treat as the “system.”
The engineer CISO’s operating model
An engineering-first CISO often comes from infrastructure, cloud, or application engineering. Their instincts are valuable: reduce attack surface, standardize controls, automate policy enforcement, harden identity, adopt strong cryptography, build paved roads for developers.
The failure mode is subtle: security becomes a static build problem. If the lock is unpickable, they assume the door is secure.
That assumption breaks in modern environments because attackers target:
- The glue code (verification logic, policy engines, permission checks)
- The pipelines (CI/CD, model deployment, artifact signing, dependency resolution)
- The operators (approvers, on-call engineers, support desks)
- The configuration surface (feature flags, IAM bindings, LLM tool permissions)
The holistic CISO’s operating model
A holistic CISO still cares about engineering quality, but they treat security as an end-to-end socio-technical system. They ask questions engineers often skip because they feel “procedural”:
- Who can change the code that validates critical operations?
- What’s the emergency path, and how could it be abused?
- Which identities can bypass controls “temporarily”?
- What’s monitored, what’s not, and why?
- When this fails, what’s our blast radius and time-to-contain?
Holistic CISOs build programs that hold up when controls fail, not just when controls work.
The core fallacy: “You didn’t eliminate risk—you moved it”
The most useful sentence in Schwed’s article is also the easiest to miss: engineering-heavy security doesn’t remove risk; it often relocates it.
Here’s how it shows up in practice.
A simple control, an expensive bypass
Take a control like: “Only execute this transaction if a valid digital signature is present.” Cryptography might be solid. Keys might be in HSMs. Everything looks airtight.
Attackers don’t break elliptic curve math. They go after:
- The function that decides what “valid” means
- The build pipeline that compiles and ships the verification code
- The deployment credentials that can swap out that component
- The operational process that approves emergency hotfixes
If they can tamper with verification logic or its delivery path, they don’t need to defeat your cryptography at all.
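To make the point concrete, here’s a minimal sketch (hypothetical function names, a demo key instead of an HSM) of how the cryptography can be sound while the decision logic around it becomes the real attack surface:

```python
import hmac
import hashlib

SECRET_KEY = b"demo-key"  # illustrative only; real keys would live in an HSM

def crypto_verify(payload: bytes, signature: str) -> bool:
    # The "unbreakable" part: a constant-time HMAC comparison.
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def is_valid_transaction(payload: bytes, signature: str,
                         emergency_override: bool = False) -> bool:
    # The glue code attackers actually target: a "temporary" bypass lane
    # added during an operational emergency and never removed.
    if emergency_override:
        return True  # no signature required at all
    return crypto_verify(payload, signature)
```

An attacker who can flip `emergency_override` (via a config change, a hotfix, or a deploy credential) never has to touch the HMAC.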
Schwed points to 2025’s digital asset theft trends as evidence, highlighting a major exchange breach where the theft wasn’t about cracking algorithms. It was about taking control of operational infrastructure (wallets and the systems around them) and moving funds fast.
AI systems repeat the pattern—faster
AI products add new “bypass lanes” because they combine:
- Model behavior (prompts, tool calling, system messages)
- Integration permissions (what the model can access and do)
- Supply chain exposure (models, plugins, open source, APIs)
- Human workflows (review, red teaming, safety approvals)
Most teams obsess over the model and underinvest in the permissions boundary.
A memorable way to say it:
Strong controls fail when you mount them on weak workflows.
That’s why a purely engineering-focused CISO can be a liability in AI-heavy organizations. AI increases change velocity—and change velocity punishes brittle programs.
Where AI helps: balancing technical operations with business risk
AI in cybersecurity is most valuable when it supports both sides of the CISO job:
- Technical execution (detection, response, automation)
- Risk management (exposure, prioritization, decision support)
Used well, AI becomes the connective tissue between “we shipped controls” and “we reduced business risk.” Used poorly, AI just accelerates the same tunnel vision.
Bridge point #1: AI can connect security signals to business impact
Most companies have plenty of security telemetry. What they lack is a consistent translation from telemetry to business decisions.
Practical examples of AI-supported translation:
- Correlate identity events, code changes, and production deploys to highlight “high-risk change windows.”
- Summarize incident patterns into board-ready narratives (what happened, what it cost, what changes prevent recurrence).
- Map control failures to critical business services so prioritization isn’t driven by whichever alert is loudest.
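The first bullet above can be sketched in a few lines. This is an illustrative toy (hypothetical event feeds and names), not a production correlation engine: flag windows where identity changes, code changes, and production deploys cluster together.

```python
from datetime import datetime, timedelta

# Hypothetical event feeds that would normally come from your IdP,
# source control, and deploy tooling: (timestamp, description) pairs.
identity_events = [(datetime(2025, 6, 1, 14, 5), "new admin role granted")]
code_changes    = [(datetime(2025, 6, 1, 14, 20), "change to signing pipeline")]
deploys         = [(datetime(2025, 6, 1, 14, 45), "prod deploy of wallet service")]

def high_risk_windows(window: timedelta = timedelta(hours=1)):
    """Flag windows where all three signal types land close together."""
    flagged = []
    for t_id, id_desc in identity_events:
        nearby_code = [c for c in code_changes if abs(c[0] - t_id) <= window]
        nearby_deploys = [d for d in deploys if abs(d[0] - t_id) <= window]
        if nearby_code and nearby_deploys:
            flagged.append((t_id, id_desc, nearby_code, nearby_deploys))
    return flagged
```

The AI layer’s job is to do this correlation at scale and attach business context; the logic itself is simple enough that you can validate it by hand.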
This is where holistic CISOs shine: they demand that the program answers, “So what?” AI can make that answer faster and more defensible.
Bridge point #2: ML can detect when a security program becomes inward-facing
Security programs drift. Quietly.
A telltale sign is when teams optimize for internal metrics that don’t match attacker reality: patch counts, tool coverage, policy compliance—while exploit paths remain open.
Machine learning can flag this drift by modeling attack-path likelihood and comparing it to control investment. If you’re spending heavily on a hardened perimeter while lateral movement remains trivial, the mismatch score should make that obvious.
Signals that are especially useful for “drift detection”:
- High volume of closed tickets with low production impact
- Repeated exceptions for the same privileged workflows
- Overuse of break-glass accounts or emergency deploy paths
- Increasing mean time to contain despite growing tool spend
The goal isn’t to let ML “grade” your security team. The goal is to surface misalignment early—before attackers do it for you.
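One way to picture the mismatch score: compare each area’s share of control spend against its share of modeled attack-path likelihood. The figures and area names below are hypothetical, and a real implementation would get likelihoods from attack-path analysis tooling rather than hardcoded values:

```python
# Hypothetical per-area figures: annual control spend vs. a modeled
# attack-path likelihood (0-1) for an attacker reaching crown jewels.
areas = {
    "perimeter":        {"spend": 900_000, "likelihood": 0.10},
    "lateral_movement": {"spend":  50_000, "likelihood": 0.70},
    "ci_cd":            {"spend":  80_000, "likelihood": 0.55},
}

def mismatch_scores(areas):
    """Rank areas by (likelihood share) minus (spend share).
    A large positive score = underfunded relative to modeled risk."""
    total_spend = sum(a["spend"] for a in areas.values())
    total_like = sum(a["likelihood"] for a in areas.values())
    scores = {
        name: a["likelihood"] / total_like - a["spend"] / total_spend
        for name, a in areas.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With these toy numbers, lateral movement tops the list while the heavily funded perimeter scores deeply negative, which is exactly the inward-facing drift this section describes.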
Bridge point #3: automated risk assessment needs human strategic oversight
Automated risk scoring is great for speed. It’s also easy to game.
AI models tend to overweight what’s measurable (scanner findings, CVSS, alert volume) and underweight what’s messy (organizational incentives, operational shortcuts, undocumented admin paths).
Holistic leaders put guardrails around AI-driven risk decisions:
- Require human sign-off for risk acceptance on crown-jewel systems
- Audit the model’s “top drivers” monthly (what is it rewarding?)
- Backtest: did “high risk” areas actually produce incidents?
- Use AI to propose priorities, not to declare truth
A solid mantra here: AI should accelerate judgment, not replace it.
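The backtesting guardrail above is worth making concrete. A minimal sketch (hypothetical scores, systems, and threshold) of checking whether last quarter’s “high risk” labels were actually borne out:

```python
def backtest_risk_scores(scored, incidents, threshold=0.7):
    """Precision of the 'high risk' label: of the areas the model
    flagged, how many actually produced incidents?"""
    high = {name for name, score in scored.items() if score >= threshold}
    if not high:
        return None  # nothing was flagged; nothing to backtest
    hits = len(high & incidents)
    return hits / len(high)

# Hypothetical quarter: model scores vs. where incidents actually occurred.
scores = {"payments": 0.9, "marketing_site": 0.8, "hr_portal": 0.3}
incidents_seen = {"payments", "hr_portal"}
```

Here precision comes out at 0.5, and the more interesting finding is the miss: `hr_portal` had an incident despite scoring 0.3, which is precisely the kind of “messy” risk (shortcuts, undocumented admin paths) the model underweights.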
A practical hiring lens: how to spot the right CISO for AI-era risk
If you’re hiring a CISO in 2026 planning cycles (which many teams are right now), don’t screen for “technical vs. non-technical.” Screen for systems thinking under adversarial pressure.
Interview questions that expose the archetype
Ask these and listen for end-to-end thinking:
- “Where does risk hide after you implement strong preventative controls?”
- “Walk me through how you secure CI/CD and artifact integrity for critical services.”
- “If prompt injection hits an internal AI agent, what fails first: model, tools, or permissions?”
- “What’s your incident response philosophy: stop the bleed fast, or preserve evidence first?”
- “What security metric do you not trust, and why?”
Engineering-only answers fixate on a control. Holistic answers map the entire chain: people, permissions, pipelines, monitoring, response.
What you should expect the CISO to build in the first 90 days
A holistic, AI-ready CISO will usually prioritize:
- Crown-jewel mapping (what truly must not fail)
- Identity hardening for humans and machines (especially CI/CD)
- Software supply chain integrity (signing, provenance, protected branches)
- AI governance for tool permissions (what models can call, with what scopes)
- Incident response rehearsals (tabletops that include AI and supply chain scenarios)
If the plan is “buy platform X and roll out policies,” you’re likely hiring for the engineer archetype.
The “AI-enhanced holistic” security program: a blueprint
A strong AI in cybersecurity program doesn’t mean “use more AI.” It means using AI where it creates clarity and speed, and insisting on resilience where AI can’t save you.
Prevention (engineering strength), but not as the whole strategy
Keep the engineering rigor:
- least privilege
- secure defaults
- hardened identity
- segmentation
- secrets management
But treat it as table stakes.
Detection and response (AI for speed and triage)
AI is legitimately useful here:
- anomaly detection across identity + endpoint + cloud logs
- alert clustering to reduce noise
- incident summarization and timeline generation
- guided investigations for junior analysts
This helps teams respond faster—especially when staffing is tight.
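Alert clustering, the second bullet above, doesn’t require anything exotic to prototype. A toy sketch (hypothetical entity and rule names) that groups raw alerts by a coarse fingerprint so analysts triage clusters instead of individual events:

```python
from collections import defaultdict

def cluster_alerts(alerts):
    """Group alerts by (entity, rule) fingerprint; largest clusters first."""
    clusters = defaultdict(list)
    for alert in alerts:
        key = (alert["entity"], alert["rule"])
        clusters[key].append(alert)
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical raw feed: three alerts collapse into two triage items.
alerts = [
    {"entity": "ci-runner-7", "rule": "anomalous_token_use"},
    {"entity": "ci-runner-7", "rule": "anomalous_token_use"},
    {"entity": "alice",       "rule": "impossible_travel"},
]
```

Production systems use richer fingerprints (and learned similarity), but the payoff is the same: a smaller queue, sorted by signal density.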
Resilience (the part teams underfund)
Resilience is where holistic CISOs earn their keep:
- tested containment paths
- kill switches and safe-mode operations
- immutable logging for critical systems
- rehearsed decision-making under pressure
If you’re building AI products, include AI-specific resilience controls:
- tool-call allowlists
- scoped tokens with short TTLs
- runtime policy enforcement for agent actions
- monitoring for prompt injection indicators and unusual tool usage
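The first two bullets, tool-call allowlists and short-TTL scoped tokens, can be enforced with a deny-by-default gate in front of every agent action. A minimal sketch, with hypothetical tool names, scopes, and TTL:

```python
import time

# Hypothetical runtime policy: which tools an agent may call, with what scopes.
TOOL_ALLOWLIST = {
    "search_docs":   {"scopes": {"read"}},
    "create_ticket": {"scopes": {"read", "write:tickets"}},
}
TOKEN_TTL_SECONDS = 300  # short-lived, scoped tokens

def authorize_tool_call(tool, requested_scopes, token_issued_at, now=None):
    """Deny-by-default check evaluated before every agent tool call."""
    now = now if now is not None else time.time()
    policy = TOOL_ALLOWLIST.get(tool)
    if policy is None:
        return False, "tool not on allowlist"
    if not set(requested_scopes) <= policy["scopes"]:
        return False, "scope exceeds policy"
    if now - token_issued_at > TOKEN_TTL_SECONDS:
        return False, "token expired"
    return True, "ok"
```

The design choice worth noting: the gate sits outside the model, so a prompt-injected agent can ask for anything it likes and still only execute what policy permits.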
What this means for your org (and a simple next step)
If your security strategy is led purely as an engineering discipline, you’ll build impressive controls—and still get surprised by pipelines, permissions, and human workflows. That’s the engineering-only trap.
The better path is an AI-ready, holistic CISO approach: strong technical foundations, clear business risk framing, and a resilience plan that assumes adversaries will find a way around your favorite controls.
If you’re trying to modernize security operations with AI, start with a practical exercise: pick one “unbreakable” control in your environment and map the top five ways an attacker would route around it—through CI/CD, identity, configuration, or approval workflows. Then ask where AI can help you detect that reroute early.
Which part of your environment is most likely to fail first: the control, the pipeline that ships it, or the people authorized to bypass it?