Think Like an Attacker (With AI): CISO-Style Defense
Most security programs don’t fail because the team is “bad at security.” They fail because they defend their org chart, not their attack surface.
Etay Maor, chief security strategist at Cato Networks and a cybersecurity professor, puts it bluntly: defenders often fall into checklist thinking. Attackers don’t. Attackers run campaigns. They test, adapt, and use whatever works—technical exploits, social engineering, physical access, and open-source breadcrumbs you didn’t even realize were public.
This post is part of our AI in Cybersecurity series, and I’ll take Maor’s “think like an attacker” advice one step further: AI is now the most practical way to operationalize attacker thinking at enterprise scale. Not because AI replaces security teams (it doesn’t), but because it helps you simulate adversary behavior, sift signal from noise, and move faster than human-only workflows.
“Thinking like an attacker” means escaping checklist security
Answer first: Thinking like an attacker means prioritizing paths to impact, not controls on a spreadsheet.
A checklist approach asks: “Do we have MFA?” “Do we have EDR?” “Did we run awareness training?” An attacker approach asks: “How do I get money, data, or control—fast—and what will stop me in this specific environment?”
Maor teaches nontechnical students to reason like threat actors—because if you only teach tools, you create tool-dependent defenders. The better habit is to model the attacker’s workflow:
- Recon: What can I learn without touching the target?
- Initial access: Who’s easiest to trick or which system is easiest to break?
- Execution + persistence: How do I keep access and avoid detection?
- Privilege escalation + lateral movement: How do I reach crown jewels?
- Actions on objectives: Exfiltrate, encrypt, commit fraud, extort, disrupt.
Here’s the uncomfortable truth: most organizations are “secure” in the abstract and vulnerable in the specific. The specific is where attackers live.
AI makes attacker-style defense realistic (and repeatable)
Answer first: AI helps defenders “think like attackers” by automating recon analysis, predicting likely attack paths, and accelerating investigation and response.
You can’t run a full adversary simulation every week with a small team. You also can’t manually analyze every log line, alert, and anomaly when your environment changes daily. This is where AI-driven cybersecurity earns its keep.
AI-driven threat modeling: from static diagrams to living models
Traditional threat modeling often becomes a once-a-year workshop and a stale diagram. AI can change that by continuously mapping what matters:
- Asset discovery and relationship mapping (services, identities, data flows)
- Exposure analysis (internet-facing services, misconfigurations, leaked credentials)
- Likely attack paths based on real-world tactics and your environment
When you treat threat modeling as a living process, you stop debating theoretical risks and start prioritizing what an attacker would actually exploit next.
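To make "living" concrete, here is a minimal sketch of an attack-path model: assets and trust relationships as a graph, with paths from internet-facing entry points to crown jewels scored by how easy each hop is. The environment, asset names, and edge weights below are illustrative assumptions, not any product's data model.

```python
# Hypothetical environment: an edge A -> B means "an attacker who
# controls A can plausibly reach B", weighted by hop ease (0-1).
EDGES = {
    "internet":         [("vpn-portal", 0.6), ("marketing-site", 0.9)],
    "vpn-portal":       [("corp-network", 0.5)],
    "marketing-site":   [("cms-service-acct", 0.7)],
    "cms-service-acct": [("corp-network", 0.4)],
    "corp-network":     [("file-server", 0.8), ("domain-admin", 0.3)],
    "domain-admin":     [("customer-db", 0.9)],
    "file-server":      [("customer-db", 0.5)],
}
CROWN_JEWELS = {"customer-db"}

def attack_paths(start="internet"):
    """Enumerate simple paths from an entry point to a crown jewel,
    scoring each path as the product of its hop-ease weights."""
    stack = [(start, [start], 1.0)]
    while stack:
        node, path, score = stack.pop()
        if node in CROWN_JEWELS:
            yield score, path
            continue
        for nxt, ease in EDGES.get(node, []):
            if nxt not in path:  # skip cycles
                stack.append((nxt, path + [nxt], score * ease))

# Highest-scoring paths first: this is your "fix next" list.
for score, path in sorted(attack_paths(), reverse=True):
    print(f"{score:.2f}  " + " -> ".join(path))
```

The point isn't the graph code; it's that the model is regenerated from inventory and identity data on a schedule, so the "fix next" list tracks the environment instead of last year's workshop diagram.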
Predictive analysis: prioritizing what’s most likely to be hit
Attackers are opportunistic. They go where conditions are favorable: weak identity controls, overprivileged accounts, exposed admin panels, brittle third-party integrations.
AI can support risk-based prioritization by correlating:
- vulnerability data
- exploit activity and attacker chatter patterns (where available internally)
- observed misconfigurations
- identity and privilege posture
- historical incident patterns
The output you want isn’t “more alerts.” It’s a short list that reads like an attacker’s plan: “These 12 things are most likely to get you breached this quarter.”
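A minimal sketch of that correlation step, assuming you can normalize each signal to a 0-1 score per asset. The signal names, weights, and assets below are made up; in practice you would tune the weights against your own incident history.

```python
# Hypothetical per-asset risk signals, each normalized to 0-1.
ASSETS = [
    {"name": "jenkins-prod", "vuln": 0.9, "exposed": 1.0, "priv": 0.8, "misconfig": 0.6},
    {"name": "hr-laptop-42", "vuln": 0.4, "exposed": 0.0, "priv": 0.2, "misconfig": 0.1},
    {"name": "svc-billing",  "vuln": 0.3, "exposed": 0.2, "priv": 0.9, "misconfig": 0.7},
]

# Assumed weights: exposure and privilege dominate because attackers
# are opportunistic. Tune these against your own incidents.
WEIGHTS = {"vuln": 0.25, "exposed": 0.35, "priv": 0.25, "misconfig": 0.15}

def risk_score(asset):
    return sum(asset[k] * w for k, w in WEIGHTS.items())

# The output is a ranked shortlist, not another alert queue.
for asset in sorted(ASSETS, key=risk_score, reverse=True):
    print(f"{risk_score(asset):.2f}  {asset['name']}")
```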
Security operations automation: shrinking the time-to-truth
If your SOC is buried, “think like an attacker” becomes aspirational. AI helps by accelerating the boring but critical parts of the workflow:
- Alert triage (dedupe, clustering, severity scoring)
- Entity behavior baselining (users, devices, service accounts)
- Investigation summarization (“What happened? What changed? What’s affected?”)
- Response guidance (containment steps, playbook selection)
The goal isn’t hands-off security. It’s fewer dead-end investigations and faster containment when the signal is real.
The hard-hat lesson: attackers blend digital, social, and physical
Answer first: The best attacker thinking combines technical exploits with social engineering and real-world access tricks.
Maor has a memorable line from his teaching: one of the best hacking tools is a hard hat and a yellow vest. That's not a joke. It's a reminder that security failures often happen at boundaries: front desks, vendors, shared workspaces, shared inboxes, shared credentials.
AI helps here too, but not in the way most people expect. It won’t magically stop someone from walking into a building. What it can do is support layered defenses that assume humans will be human.
AI-enhanced social engineering defense (without blaming employees)
Security awareness that ends at “don’t click links” is outdated. Modern programs pair training with controls and detection.
Practical ways AI supports this:
- Phishing detection and classification that adapts to new lures
- Language-pattern analysis for business email compromise indicators (tone shifts, urgency cues, payment redirection)
- Outbound anomaly detection for unusual invoice changes, bank detail edits, or mass forwarding rules
A good stance: treat social engineering as a systems problem, not a morality play about careless users.
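To illustrate the kinds of signals involved, here is a deliberately simple sketch of BEC indicator scoring. A production system would use trained models over full message context and sender history; the keyword lists, weights, and threshold here are toy assumptions.

```python
import re

# Toy indicator patterns; a real system uses a trained classifier.
URGENCY  = re.compile(r"\b(urgent|immediately|today|asap|right away)\b", re.I)
PAYMENT  = re.compile(r"\b(wire|invoice|bank details|account number|payment)\b", re.I)
REDIRECT = re.compile(r"\b(new account|updated (bank|payment) (details|info))\b", re.I)

def bec_score(msg: str) -> int:
    score = 0
    score += 1 if URGENCY.search(msg) else 0
    score += 1 if PAYMENT.search(msg) else 0
    score += 2 if REDIRECT.search(msg) else 0  # redirection is the strongest cue
    return score

msg = ("Hi, please process this invoice today. Note our updated bank "
       "details. Wire to the new account below.")
print(bec_score(msg))  # 4 -> route to human review, don't silently block
```

Note that the score routes the message to review rather than blocking outright; the control absorbs the mistake so the employee doesn't have to be perfect.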
OSINT reality check: your “public” data is usually more public than you think
Maor shared a classroom OSINT project where a student mapped social relationships using Venmo transactions, because many people never change the default privacy settings.
Every organization should assume attackers do OSINT first. AI can help you operationalize “OSINT hygiene” by:
- monitoring for leaked credentials and impersonation signals
- detecting exposed sensitive documents or misindexed cloud storage
- flagging risky public metadata patterns (employee role + tech stack + vendor names)
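As one concrete example of the first bullet, here is a sketch of a leaked-credential check against the public Have I Been Pwned Pwned Passwords range API. The endpoint and response format are real; everything around it (where candidate passwords come from, what you do with a hit) is left to your pipeline.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Check a password against HIBP's Pwned Passwords range API.
    Only the first 5 hex chars of the SHA-1 hash leave your network
    (k-anonymity); the password itself never does."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("Summer2024!"))  # nonzero means it appears in breach data
```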
If you want a quick win before Q1 planning kicks off: run an internal OSINT sprint (marketing, HR, security, legal together) and see what a motivated attacker can learn in 60 minutes.
How to build an AI-enabled “attacker mindset” program
Answer first: Build an attacker-mindset program by combining adversary simulation, AI-assisted detection engineering, and a monthly feedback loop from incidents and near-misses.
Most companies try to buy their way into attacker thinking. Tools help, but the program matters more. Here’s a structure I’ve found works because it’s repeatable.
1) Pick 3 attacker plays that match your business
Choose scenarios based on how you make money and how you’d lose it. Examples:
- Ransomware with identity takeover
- Vendor invoice fraud (BEC)
- Data theft from a SaaS misconfiguration
Write each play as a one-page attacker plan: entry point, privileges needed, likely logs, and impact.
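A one-page plan can be as lightweight as a structured record the whole team can read and diff. A sketch, with illustrative fields and values:

```python
from dataclasses import dataclass

@dataclass
class AttackerPlay:
    name: str
    entry_point: str
    privileges_needed: list[str]
    likely_logs: list[str]  # where you'd expect to see traces
    impact: str

bec_play = AttackerPlay(
    name="Vendor invoice fraud (BEC)",
    entry_point="compromised or spoofed vendor mailbox",
    privileges_needed=["none beyond a convincing email thread"],
    likely_logs=["mail gateway verdicts", "inbox rule changes",
                 "AP system bank-detail edits"],
    impact="fraudulent payment approved and wired",
)
```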
2) Use AI to turn plays into detection hypotheses
For each play, define what “we’d probably see”:
- anomalous sign-ins (new geography, impossible travel, new device)
- privilege changes (role assignment spikes, new OAuth app consents)
- suspicious data access (unusual downloads, mass exports)
- lateral movement signals (remote management tools, new service creation)
Then use AI to help:
- generate candidate detections and queries
- cluster historical alerts into patterns
- summarize detection gaps (“we don’t log X” or “we can’t attribute Y”)
The value is speed. You still need engineers to validate, tune, and avoid noisy detections.
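As a worked example, here's what the first hypothesis ("anomalous sign-ins, impossible travel") might look like as a testable detection: compute the implied travel speed between a user's consecutive sign-ins. The event shape, coordinates, and the 900 km/h threshold are assumptions to tune, not constants.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def km(a, b):
    """Great-circle distance between (lat, lon) pairs (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

# Hypothetical sign-in events for one user, geolocated by IP.
SIGNINS = [
    {"ts": "2025-01-10T09:00:00", "loc": (40.71, -74.01)},  # New York
    {"ts": "2025-01-10T10:30:00", "loc": (51.51, -0.13)},   # London
]

MAX_KMH = 900  # roughly airliner speed; faster implies shared or stolen creds

for prev, cur in zip(SIGNINS, SIGNINS[1:]):
    hours = (datetime.fromisoformat(cur["ts"])
             - datetime.fromisoformat(prev["ts"])).total_seconds() / 3600
    speed = km(prev["loc"], cur["loc"]) / hours
    if speed > MAX_KMH:
        print(f"impossible travel: {speed:.0f} km/h implied")
```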
3) Red-team the workflows, not just the perimeter
A lot of testing focuses on whether a red team can pop a box. Attackers care whether they can:
- approve a fraudulent payment
- reset an executive’s credentials via help desk
- grant access to a sensitive dataset with a convincing pretext
Run tabletop + light technical simulations that include finance, HR, legal, IT, and comms. If the only people in the room are security folks, you’re missing the real blast radius.
4) Build a monthly “attacker learning loop”
Attackers iterate. Defenders should too.
Once a month:
- review 1 incident, 1 near-miss, and 1 “we got lucky” event
- ask: what would have made this faster to detect?
- update one detection, one control, and one process
AI helps keep this lightweight by summarizing incident timelines and highlighting recurring patterns.
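As a sketch of that summarization step, here's one way to do it with the OpenAI Python client. The model name, prompt, and timeline are illustrative assumptions; any capable model running behind your own data-handling guardrails works the same way.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

TIMELINE = """\
09:01 alert: impossible travel for alice@corp
09:07 analyst: confirmed VPN logins from two countries in 6 minutes
09:15 action: alice's sessions revoked, password reset
09:40 finding: suspicious OAuth app consent granted at 08:55, now removed
"""

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    messages=[{
        "role": "user",
        "content": "Summarize this incident timeline in 3 bullets: "
                   "what happened, what changed, what's still open.\n" + TIMELINE,
    }],
)
print(resp.choices[0].message.content)
```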
People also ask: where does AI fit without creating new risk?
Answer first: Use AI where it reduces toil and improves decisions, and put guardrails around data exposure, model access, and automated actions.
Should we let AI take automated response actions?
Yes—selectively. Start with low-risk actions (tagging, deduping, enriching, ticket creation). Graduate to containment (isolating endpoints, disabling accounts) only after you’ve proven accuracy and defined rollback.
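One way to encode that graduation path is an explicit action policy: default-deny, auto-approve only the low-risk tier, and refuse containment unless a human approved it and a rollback is defined. A minimal sketch; the action names and tier split are assumptions, and the split itself is a policy choice, not a vendor feature.

```python
# Low-risk actions the automation may take on its own.
AUTO_APPROVED = {"tag_alert", "dedupe", "enrich", "create_ticket"}
# Containment actions: human approval plus a defined rollback required.
NEEDS_HUMAN = {"isolate_endpoint", "disable_account"}

ROLLBACKS = {
    "isolate_endpoint": "release_endpoint",
    "disable_account": "re_enable_account",
}

def authorize(action: str, human_approved: bool = False) -> bool:
    if action in AUTO_APPROVED:
        return True
    if action in NEEDS_HUMAN:
        # Refuse containment without a rollback path and a human in the loop.
        return human_approved and action in ROLLBACKS
    return False  # default-deny anything unrecognized

assert authorize("create_ticket")
assert not authorize("disable_account")  # no human approval yet
assert authorize("disable_account", human_approved=True)
```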
Won’t attackers use AI too?
They already do. The point isn’t to “ban AI,” it’s to out-operate them: faster detection engineering, tighter identity controls, better anomaly detection, and quicker response.
What’s the first AI use case that actually helps?
If you’re drowning in alerts, start with AI triage and investigation summarization. If you’re getting hit with fraud or BEC, start with transaction and communication anomaly detection.
Where this goes next: AI-assisted defense is becoming table stakes
AI in cybersecurity is shifting from “nice to have” to “if you don’t, you’re slower.” Maor’s core lesson—think like an attacker—lands differently in 2025 because attackers are industrialized and patient. The defenders that win are the ones that can run attacker logic continuously, not once a year.
If you’re planning your 2026 security roadmap right now, pick one place where you’ll stop doing checklist security and start doing attacker-path security. Then use AI to scale it: faster threat modeling, better prioritization, and shorter investigations.
What would change in your program if every control had to answer one blunt question: “How does this stop the attacker’s next move?”