AI in cybersecurity works best when it mirrors attacker behavior. Learn how to train teams, model real attack paths, and apply AI where it improves detection and response.

Think Like an Attacker: AI-Driven Security Training
Most companies still train defenders like they’ll only ever face yesterday’s attacks: checklists, annual modules, and tool-specific drills. Attackers don’t work that way. They iterate daily, borrow tactics from other crews, and use automation (including AI) to scale what used to be “handcrafted” crime.
Etay Maor, chief security strategist at Cato Networks and a cybersecurity professor, has a blunt way of describing the gap: defenders are often excellent at defense, but many don’t think like attackers. That one idea connects neatly to the bigger theme in this AI in Cybersecurity series: AI helps you process more signals faster—but if you don’t model the attacker’s path, you’ll automate the wrong things.
What follows is an attacker-minded approach to modern defense—plus how to use AI in security operations and training without turning it into expensive busywork.
Thinking like an attacker beats “checklist security”
Thinking like an attacker is the shortest path to better prioritization. It forces a simple question: “If I were breaking in, what’s the easiest, quietest route that pays off?” When you ask that, a lot of “security theater” falls away.
Checklist security tends to overweight controls that are easy to audit and underweight controls that break real intrusions. Attackers love that. They don’t need perfect exploits when they can:
- Reuse leaked credentials from old breaches
- Convince someone to approve an MFA prompt
- Abuse public data (OSINT) to craft believable pretexts
- Target overlooked identity and access paths (service accounts, OAuth tokens, API keys)
Maor teaches nontechnical students how adversaries plan operations—because offensive thinking isn’t about becoming a hacker. It’s about developing the habit of tracing cause → effect in a real environment.
Snippet-worthy truth: If your security program can’t describe a likely attacker path in plain language, it’s not a program—it’s a pile of controls.
The “yellow vest” lesson: attackers aren’t only digital
One of Maor’s sharpest examples is physical-social crossover: a hard hat and a yellow vest can get someone into places they don’t belong. That same pattern shows up in enterprise attacks constantly:
- “Facilities vendor” becomes access to an unattended workstation
- “New contractor” becomes a reason to request SharePoint permissions
- “Urgent CFO request” becomes a wire transfer or a password reset
AI won’t fix this by itself. But AI can help you model and stress-test these human workflows at scale—if you treat them as first-class attack surfaces.
Where AI actually helps: modeling the attacker’s workflow
AI in cybersecurity works best when it supports the analyst’s decision-making loop: collect context, find anomalies, test hypotheses, and respond quickly. The trap is using AI to generate more alerts, more tickets, and more noise.
Here are attacker-aligned, high-ROI places to apply AI security tools.
1) OSINT risk mapping (your org through an attacker’s eyes)
Attackers start with free information: job posts, Git repos, vendor pages, social profiles, public payment apps, and cached documents. Maor’s classroom OSINT exercise is a good reminder that the organization’s public data exhaust often reveals:
- Naming conventions (useful for brute-force and phishing)
- Tech stack and tools (helpful for choosing payloads)
- Org charts and approval chains (useful for social engineering)
- Relationships and communities (useful for pretexting)
How AI improves this: use AI to cluster and summarize OSINT findings into attacker-friendly “opportunity maps.” Instead of a spreadsheet of 500 open items, you get themes like:
- “Third-party support emails likely accept attachments”
- “Multiple employees advertise admin experience in tool X”
- “Public documents expose internal hostnames and meeting links”
That output is directly actionable for security awareness, identity hardening, and vendor risk management.
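To make the “opportunity map” idea concrete, here’s a minimal, dependency-free sketch that buckets raw OSINT findings into attacker-relevant themes. The keyword rules and theme labels are illustrative assumptions; a production pipeline might use embeddings or an LLM to cluster, but the output shape (themes, not 500 rows) is the point.

```python
# Minimal sketch: bucket raw OSINT findings into attacker-relevant themes.
# Assumes findings arrive as short text snippets from whatever collection
# tooling you already use; the keyword rules below are placeholders you would
# replace with embeddings or an LLM-based classifier.

from collections import defaultdict

# Hypothetical theme rules: keyword -> attacker-friendly theme label
THEMES = {
    "hostname": "Public documents expose internal hostnames",
    "meeting": "Public documents expose meeting links",
    "admin": "Employees advertise admin experience in specific tools",
    "invoice": "Third-party workflows likely accept attachments",
    "api key": "Credentials or keys exposed in public repos",
}

def build_opportunity_map(findings: list[str]) -> dict[str, list[str]]:
    """Group findings under attacker-relevant themes instead of listing rows."""
    themes = defaultdict(list)
    for finding in findings:
        lowered = finding.lower()
        matched = False
        for keyword, theme in THEMES.items():
            if keyword in lowered:
                themes[theme].append(finding)
                matched = True
        if not matched:
            themes["Unclassified (review manually)"].append(finding)
    return dict(themes)

if __name__ == "__main__":
    sample = [
        "Shared deck references intranet hostname corp-fs01",
        "LinkedIn profile: 'admin for Tool X across 3 business units'",
        "Public calendar invite includes a recurring meeting link",
    ]
    for theme, items in build_opportunity_map(sample).items():
        print(f"{theme}: {len(items)} finding(s)")
```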
2) Phishing and social engineering detection (beyond keywords)
Modern social engineering is less about typos and more about timing and context. The best phishing looks like internal workflow.
AI-driven threat detection can help by correlating:
- Sender reputation + domain age + lookalike patterns
- Message intent (invoice, password reset, HR request)
- User behavior (sudden rule creation, unusual file sharing)
- Identity signals (impossible travel, token anomalies, MFA fatigue patterns)
This matters because the attacker’s path often hinges on a single human action. Your tools should focus on preventing the one action that turns a “phish attempt” into “account takeover.”
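As a rough illustration of correlating these signals, here’s a minimal sketch that folds them into a single risk score instead of alerting on each one alone. The field names, weights, and threshold are assumptions made for the example, not a vendor schema.

```python
# Minimal sketch: correlate independent signals into one phishing risk score,
# rather than alerting on each signal in isolation. Weights are illustrative.

from dataclasses import dataclass

@dataclass
class MessageContext:
    domain_age_days: int       # from WHOIS / enrichment
    lookalike_domain: bool     # e.g. a near-miss of a trusted domain
    intent: str                # "invoice", "password_reset", "hr_request", ...
    new_forwarding_rule: bool  # user behavior after delivery
    impossible_travel: bool    # identity signal from the IdP
    mfa_push_count_1h: int     # possible MFA fatigue

def phishing_risk(ctx: MessageContext) -> float:
    """Combine weak signals; any one alone is noisy, together they tell a story."""
    score = 0.0
    if ctx.domain_age_days < 30:
        score += 0.2
    if ctx.lookalike_domain:
        score += 0.3
    if ctx.intent in {"invoice", "password_reset"}:
        score += 0.15
    if ctx.new_forwarding_rule:
        score += 0.2
    if ctx.impossible_travel:
        score += 0.2
    if ctx.mfa_push_count_1h >= 5:
        score += 0.15
    return min(score, 1.0)

if __name__ == "__main__":
    ctx = MessageContext(domain_age_days=7, lookalike_domain=True,
                         intent="password_reset", new_forwarding_rule=True,
                         impossible_travel=False, mfa_push_count_1h=6)
    print(f"risk={phishing_risk(ctx):.2f}")  # escalate above a tuned threshold
```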
3) Threat intel that drives decisions, not decks
Maor helped build a threat research function that consolidates the work of multiple research groups into a single knowledge base designed to educate and protect customers. That’s the right direction: threat intelligence should change what you do this week.
AI can help threat intelligence by:
- Normalizing indicators, TTP descriptions, and campaign notes into a searchable internal knowledge base
- Mapping observed behaviors to likely next steps (credential theft → OAuth abuse → mailbox rules → internal phishing)
- Generating “if-then” playbook recommendations that align to your environment
If your threat intel output doesn’t result in changed detections, hardened identity controls, or rehearsed incident response steps, it’s not operational—it’s marketing.
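Here’s a minimal sketch of the “likely next steps” idea, using the credential theft chain above. The behavior names and recommended actions are illustrative assumptions; the value is that intel becomes a queryable “if we see X, expect Y, do Z” record rather than a slide deck.

```python
# Minimal sketch: turn threat intel into "if we see X, expect Y, do Z" records
# the SOC can query. The chain mirrors the example in the text; behavior names
# and actions are illustrative, not a complete mapping.

NEXT_STEPS = {
    "credential_theft": ("oauth_abuse", "Review new OAuth consents for the affected user"),
    "oauth_abuse": ("mailbox_rules", "Audit inbox rules created in the last 24h"),
    "mailbox_rules": ("internal_phishing", "Warn likely internal targets and tighten send limits"),
}

def playbook_for(observed_behavior: str, depth: int = 3) -> list[str]:
    """Walk the likely attacker chain from an observed behavior and emit actions."""
    actions = []
    current = observed_behavior
    for _ in range(depth):
        if current not in NEXT_STEPS:
            break
        next_behavior, action = NEXT_STEPS[current]
        actions.append(f"if {current} then expect {next_behavior}: {action}")
        current = next_behavior
    return actions

if __name__ == "__main__":
    for line in playbook_for("credential_theft"):
        print(line)
```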
Practical attacker-mindset drills (that pair well with AI)
You don’t need a full red team to build attacker empathy. You need repeatable drills tied to real systems and real people.
Drill 1: “One credential gets popped—now what?” (identity attack path)
Answer first: map what an attacker can do with a single compromised user account and where your detection and containment break.
Run a tabletop (or purple-team) exercise around:
- Initial access: password reuse or stolen session cookie
- Persistence: mailbox rules, OAuth grants, token refresh
- Privilege escalation: help desk reset workflow, stale group memberships
- Lateral movement: shared drives, internal apps, VPN, SaaS
- Impact: data exfiltration or ransomware staging
Where AI fits: use AI to summarize identity logs and highlight unusual sequences (for example, new OAuth consent followed by mass file downloads and forwarding rules). Then turn that into a detection story your SOC can actually use.
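One way to turn that into a detection story is sketched below: scan identity events per user for the consent-then-download-then-forwarding sequence within a time window. The event names and log shape are assumptions; adapt them to whatever your IdP and SaaS audit logs actually emit.

```python
# Minimal sketch: flag the suspicious sequence described above (new OAuth
# consent, then mass file downloads, then a forwarding rule) within a window.
# Event names and the log shape are assumptions, not a specific vendor format.

from datetime import datetime, timedelta

SEQUENCE = ["oauth_consent_granted", "mass_file_download", "forwarding_rule_created"]
WINDOW = timedelta(hours=6)

def find_suspicious_sequences(events: list[dict]) -> list[str]:
    """events: dicts with 'user', 'action', and 'ts' (datetime) keys."""
    findings = []
    by_user: dict[str, list[dict]] = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        by_user.setdefault(e["user"], []).append(e)
    for user, user_events in by_user.items():
        idx, start_ts = 0, None
        for e in user_events:
            if start_ts and e["ts"] - start_ts > WINDOW:
                idx, start_ts = 0, None  # sequence went stale; start over
            if e["action"] == SEQUENCE[idx]:
                start_ts = start_ts or e["ts"]
                idx += 1
                if idx == len(SEQUENCE):
                    findings.append(f"{user}: consent -> mass download -> forwarding rule within {WINDOW}")
                    idx, start_ts = 0, None
    return findings

if __name__ == "__main__":
    now = datetime.now()
    demo = [
        {"user": "a.lee", "action": "oauth_consent_granted", "ts": now},
        {"user": "a.lee", "action": "mass_file_download", "ts": now + timedelta(minutes=40)},
        {"user": "a.lee", "action": "forwarding_rule_created", "ts": now + timedelta(hours=1)},
    ]
    print(find_suspicious_sequences(demo))
```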
Drill 2: “OSINT to pretext in 60 minutes” (social attack path)
Answer first: measure how quickly someone can craft a believable request using only public data.
Have a cross-functional team (security, HR, finance, IT) attempt to build a pretext using:
- Public job descriptions
- Vendor pages and press releases
- Social profiles and conference talks
- Any accidentally public docs
Deliverable: a one-page “attacker script” and the list of internal controls that should stop it (approval workflows, call-back procedures, out-of-band verification).
Where AI fits: use an internal AI assistant to generate multiple plausible pretexts, then train staff on how to spot the patterns (urgency, authority, secrecy) rather than memorizing examples.
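A model-free sketch of that drill is below: it combines public facts with the three manipulation patterns and labels each generated pretext. In practice you would hand a similar prompt to your internal AI assistant; the role and vendor values here are hypothetical and only show the shape of the training material.

```python
# Minimal, model-free sketch: generate varied pretexts from OSINT fields and
# label the manipulation pattern in each. The training value is in the pattern
# labels (urgency, authority, secrecy), not the exact wording.

PATTERNS = {
    "urgency": "This has to happen before end of day or the contract lapses.",
    "authority": "The CFO asked me to route this directly through you.",
    "secrecy": "Please keep this between us until the announcement goes out.",
}

def generate_pretexts(role: str, vendor: str) -> list[dict]:
    """Combine public facts (role, vendor) with known manipulation patterns."""
    pretexts = []
    for pattern, hook in PATTERNS.items():
        pretexts.append({
            "pattern": pattern,
            "script": (f"Hi, I'm calling from {vendor} about the renewal your "
                       f"{role} team owns. {hook}"),
        })
    return pretexts

if __name__ == "__main__":
    for p in generate_pretexts(role="accounts payable", vendor="Acme Support"):
        print(f"[{p['pattern']}] {p['script']}")
```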
Drill 3: “Hard hat test” (physical-to-digital crossover)
Answer first: validate whether physical presence can bypass digital controls.
This can be done safely and ethically with facilities and leadership buy-in. Evaluate:
- Visitor badge processes
- Tailgating resistance
- Screen-lock habits
- Printer and mailroom exposure
Where AI fits: use computer vision only where appropriate and lawful (many orgs won’t). More commonly, AI helps by organizing observations, linking them to policies, and producing a prioritized remediation list.
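One small sketch of that remediation step, with illustrative scores and policy references: observations get ranked by likelihood times impact so the walkthrough ends in a prioritized, policy-linked list rather than a pile of notes.

```python
# Minimal sketch: turn walkthrough observations into a prioritized remediation
# list. Scores and policy references are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    description: str
    likelihood: int   # 1 (rare) .. 5 (routine)
    impact: int       # 1 (nuisance) .. 5 (breach enabler)
    policy_ref: str   # internal policy the gap maps to

def prioritize(observations: list[Observation]) -> list[Observation]:
    """Rank by likelihood x impact, highest risk first."""
    return sorted(observations, key=lambda o: o.likelihood * o.impact, reverse=True)

if __name__ == "__main__":
    findings = [
        Observation("Tailgating into the loading dock went unchallenged", 4, 5, "PHY-02"),
        Observation("Unlocked screens in the finance bullpen", 5, 4, "ACC-11"),
        Observation("Visitor badges not collected at exit", 3, 2, "PHY-07"),
    ]
    for o in prioritize(findings):
        print(f"[{o.policy_ref}] {o.description} (risk={o.likelihood * o.impact})")
```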
Building a modern cyber team: diversity of backgrounds is a security control
Maor’s experience teaching nontechnical students is a quiet indictment of how many security teams hire: too narrow, too tool-focused, too obsessed with perfect resumes.
A strong cyber program needs people who can:
- Investigate like a detective
- Communicate like a marketer during an incident
- Negotiate like a lawyer with regulators and vendors
- Model incentives like a fraud analyst
- Translate risk like a finance partner
I’ve found that teams that hire only for “10 years in X tool” end up with brittle security. They can operate dashboards, but they struggle to anticipate how attackers adapt.
AI changes the skills mix (and that’s good)
AI in security operations is steadily shifting entry-level work:
- Less time triaging obvious alerts
- More time validating ambiguous signals
- More time writing detections and response logic
- More time on cross-team coordination and containment
That means curiosity and clear thinking become even more valuable. If AI can draft a first-pass query or summarize logs, your differentiator is knowing what to ask next.
People also ask: “How do we use AI without making security worse?”
Answer first: treat AI outputs as hypotheses, not truths, and put guardrails around data exposure and automation.
Here’s a practical checklist that doesn’t devolve into checkbox theater:
- Define the decision AI supports. Example: “Should we disable this account?” not “analyze alerts.”
- Log and measure outcomes. Track false positives, time-to-triage, time-to-containment, and analyst overrides.
- Keep humans on high-impact actions. Disabling accounts, blocking vendors, or deleting emails should require review unless confidence is proven.
- Protect sensitive data. Restrict prompts, sanitize logs, and control what can be sent to AI systems.
- Continuously red-team the AI. Test for prompt injection, data leakage, and adversarial manipulation.
AI is a force multiplier for both sides. The defender advantage comes from process discipline and attacker-path thinking, not from buying another tool.
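To make the “humans on high-impact actions” guardrail concrete, here’s a minimal sketch that treats an AI recommendation as a hypothesis with a confidence score and routes destructive actions to analyst review. The action names and threshold are assumptions used to illustrate the gate, not a product feature.

```python
# Minimal sketch of confidence-gated automation: AI output is a hypothesis,
# and anything high-impact is queued for analyst review instead of executed.
# Action names and the threshold are illustrative assumptions.

HIGH_IMPACT = {"disable_account", "block_vendor", "delete_email"}
AUTO_APPROVE_THRESHOLD = 0.95  # only lower this after measured precision justifies it

def route_recommendation(action: str, confidence: float, target: str) -> str:
    """Decide whether an AI recommendation runs automatically or goes to a human."""
    if action in HIGH_IMPACT and confidence < AUTO_APPROVE_THRESHOLD:
        return f"QUEUE_FOR_REVIEW: {action} on {target} (confidence={confidence:.2f})"
    return f"EXECUTE: {action} on {target} (confidence={confidence:.2f})"

if __name__ == "__main__":
    print(route_recommendation("disable_account", 0.71, "a.lee"))            # analyst reviews
    print(route_recommendation("quarantine_attachment", 0.71, "msg-1482"))   # low impact, runs
```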
A better way to start (even if you’re behind)
If your program feels overwhelmed, start where attackers start: identity, email, and exposed information. Pick one attacker path and close it end-to-end.
A realistic 30-day plan:
- Week 1: Run an OSINT review and fix the top 10 exposures (public docs, stale repos, leaked credentials)
- Week 2: Map one “account takeover to data exfil” path and add two detections + one containment playbook
- Week 3: Run a social engineering tabletop with finance/HR and implement a call-back verification rule
- Week 4: Add AI-assisted log summarization and case notes to reduce analyst time on documentation
Progress is measurable: fewer successful phishes, shorter dwell time, faster containment, fewer “we didn’t know we had that exposed” surprises.
Security teams that do this well tend to look calmer in Q4—and that matters in December, when staffing is thin, change freezes are common, and attackers know response times slow down.
What to do next for AI-driven, attacker-minded defense
Thinking like an attacker isn’t a slogan. It’s a training method and a design constraint for your security program. Pair it with AI where it counts—OSINT triage, anomaly detection, threat intel normalization, and faster investigations—and you get a defense posture that improves every month instead of every audit cycle.
If you’re evaluating AI in cybersecurity right now, start by documenting your top three attacker paths, then ask a simple question: Which steps can AI help us detect earlier or respond to faster without increasing risk?
That answer will tell you whether you’re buying automation—or building resilience.