AI threat detection helps stop zero-days, OAuth scams, and RCE faster. Learn a practical playbook to reduce exposure and automate response.

AI Threat Detection for 0-Days, OAuth Scams, RCE
Most security teams treat "patching" and "phishing training" as separate problems. This week's incident mix shows why that mental model fails.
We've got actively exploited Apple and Chrome zero-days, a WinRAR path traversal flaw pulled into the Known Exploited Vulnerabilities (KEV) catalog, a .NET behavior ("SOAPwn") that can turn file writes into RCE, and a new OAuth copy-paste trick (ConsentFix) that dodges some of the endpoint signals defenders rely on. Layer on top the LastPass fine tied to its 2022 breach, and the pattern is blunt: attackers are winning by moving faster than human-scale processes.
This post is part of our AI in Cybersecurity series, and I'm going to take a stance: AI-driven threat detection and security automation aren't "nice to have" anymore; without them, your exposure window is whatever your slowest manual workflow is. The goal here isn't to rehash headlines. It's to translate them into an operating plan you can run next week.
What this week's exploits have in common
The shared thread is speed and ambiguity: defenders have incomplete information while attackers already have working playbooks.
- Zero-days (Apple/WebKit and Chrome/ANGLE): minimal public details, high-impact, often "highly targeted," which makes rule-based detection brittle.
- "Normal" tools used in abnormal ways: .NET HTTP client proxies accepting file:// paths; OAuth auth codes exposed in a localhost URL; calendar subscriptions used as delivery rails.
- Opportunistic exploitation after disclosure: surging React-related exploitation is a reminder that public proof-of-concepts shorten time-to-weaponization.
Here's the practical takeaway: your detection program has to generalize. It can't depend on a CVE name showing up in an IDS signature update.
AI helps when it's applied to the right layer:
- Behavioral detection (what the system is doing, not what the exploit is called)
- Identity and session anomaly detection (OAuth and SSO abuse is an identity problem first)
- Automation to compress mean-time-to-patch and mean-time-to-contain
Zero-days: why "highly targeted" still hits enterprises
Two WebKit zero-days, CVE-2025-14174 (memory corruption) and CVE-2025-43529 (use-after-free), were patched across Apple platforms, and one issue also affected Chrome through the shared ANGLE component.
"Highly targeted" gets misunderstood. Teams hear it and think "not us." The real risk is that targeted chains become commoditized fast:
- A commercial spyware chain today becomes an e-crime loader tomorrow.
- Even before that happens, executives and admins are high-value targets, and they're inside your network.
Where AI-driven threat detection fits for zero-days
AI doesn't magically "know" a new CVE. What it can do well is flag rare or suspicious behavior that often accompanies exploitation.
Examples of high-signal detections that matter even when the exploit is unknown:
- Browser-to-child-process anomalies: Safari/Chrome spawning unusual child processes, script hosts, or shell interpreters.
- New persistence right after web browsing: Launch agents, scheduled tasks, unusual login items.
- Credential access attempts after a browser crash/restart: token theft patterns and keychain access attempts.
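The first of those detections can be sketched as a simple correlation rule. This is an illustrative Python sketch, not tied to any specific EDR product; the event field names (`parent_image`, `child_image`) and the process lists are assumptions.

```python
# Flag browsers spawning shells or script interpreters: a rare, high-signal
# behavior that often accompanies exploitation even for unknown CVEs.
BROWSERS = {"safari", "chrome", "msedge", "firefox"}
SUSPICIOUS_CHILDREN = {"bash", "zsh", "sh", "powershell", "osascript", "curl"}

def is_anomalous_child(event: dict) -> bool:
    """True when a browser process spawns a shell or script interpreter."""
    parent = event.get("parent_image", "").lower()
    child = event.get("child_image", "").lower()
    return parent in BROWSERS and child in SUSPICIOUS_CHILDREN

events = [
    {"parent_image": "Chrome", "child_image": "powershell"},   # exploit-like
    {"parent_image": "Chrome", "child_image": "chrome"},       # renderer spawn: normal
    {"parent_image": "svchost", "child_image": "powershell"},  # not a browser parent
]
alerts = [e for e in events if is_anomalous_child(e)]
```

In practice the allow/deny sets would be learned from your own telemetry rather than hardcoded; the point is that the rule keys on behavior, not on a CVE identifier.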
If you're evaluating AI for cybersecurity, ask a blunt question: Can it correlate browser telemetry, endpoint events, and identity signals into one incident without my analyst stitching it together? If the answer is "no," you're buying a dashboard, not a capability.
RCE isn't always "a vuln": sometimes it's a design surprise
The .NET issue labeled SOAPwn is a good example of why defenders need to hunt for classes of failure, not just CVEs.
The core behavior: some .NET HTTP client proxy implementations can accept non-HTTP URLs (like filesystem paths). Under the wrong conditions, a SOAP request that was supposed to be sent over HTTP can be written to a local path instead.
That can cascade into:
- Arbitrary file write → webshell drop
- Script drop (PowerShell) → code execution
- NTLM challenge capture/relay in certain flows
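The reported behavior is in .NET proxy implementations, but the underlying class of failure (a client that accepts non-HTTP endpoints) generalizes to any language. A minimal sketch of the defensive check, in Python for consistency with the other examples here:

```python
# Guard against the SOAPwn class of failure: refuse to dispatch a request
# to anything that is not a genuine HTTP(S) network endpoint.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def validate_endpoint(url: str) -> str:
    """Raise if the endpoint is not HTTP(S) with a real network host."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.netloc:
        raise ValueError(f"refusing non-HTTP endpoint: {url!r}")
    return url
```

A `file:///inetpub/wwwroot/shell.aspx` "endpoint" fails both checks: the scheme is `file` and there is no network host, so the request (and the file write it would become) never happens.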
How AI helps here (and what it shouldn't pretend to do)
AI won't fix risky defaults in application frameworks. It can help you spot the exploitation path earlier by detecting:
- Unexpected file writes by web app worker processes (especially to web roots or scriptable directories)
- New .aspx, .jsp, .php, or script artifacts created by service accounts
- Outbound authentication attempts (NTLM/Kerberos anomalies) that don't match baseline
If you run a lot of .NET, this week is a reminder to add an "AI-assisted" control in AppSec and SecOps: baseline what "normal writes" look like for your web tiers, then alert hard on deviations.
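That baseline-then-deviate pattern is simple to prototype. A sketch under assumed event fields (`process`, `directory`); real deployments would key on more dimensions (file extension, deployment-pipeline identity, time of day):

```python
# Baseline which (process, directory) pairs normally write files on the
# web tier, then flag writes that fall outside that baseline.
from collections import Counter

def build_baseline(history: list, min_count: int = 5) -> set:
    """(process, directory) pairs seen often enough to count as normal."""
    counts = Counter((e["process"], e["directory"]) for e in history)
    return {pair for pair, n in counts.items() if n >= min_count}

def deviations(events: list, baseline: set) -> list:
    """Writes whose (process, directory) pair is outside the baseline."""
    return [e for e in events if (e["process"], e["directory"]) not in baseline]

history = [{"process": "w3wp.exe", "directory": r"C:\inetpub\logs"}] * 10
baseline = build_baseline(history)
suspect = deviations(
    [{"process": "w3wp.exe", "directory": r"C:\inetpub\wwwroot"}], baseline
)
```

The IIS worker writing to its log directory is normal; the same worker writing into the web root is exactly the SOAPwn-style deviation you want to page on.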
WinRAR exploitation: the patch is necessary, not sufficient
CVE-2025-6218 (WinRAR path traversal, CVSS 7.8) being actively exploited by multiple groups, and landing in CISA's KEV, is exactly what "commodity exploitation" looks like.
Path traversal in archive utilities is a repeat offender because:
- users trust archives ("it's just a file"),
- extraction happens on endpoints outside hardened server controls,
- payload placement can be stealthy (dropping into Startup folders, user profile paths, or app directories).
AI's role: stop treating patching as a calendar event
Many teams still patch endpoints in waves that assume a stable threat landscape. That assumption is wrong. When something hits KEV and exploitation is active, patching becomes an incident response task.
AI-enabled security automation can compress that response by:
- Auto-triaging vulnerability exposure based on real software inventory and usage
- Prioritizing by exploit activity + asset criticality (not just CVSS)
- Triggering compensating controls when patching can't happen immediately (restrict archive extraction from email, block common delivery file types, tighten execution policies)
A simple rule I've found works: KEV + active exploitation means your SLA is measured in days, not weeks. If your tooling or approvals can't hit that, you need automation.
OAuth scams (ConsentFix): the attack moved into the browser
The new ConsentFix technique is a nasty evolution of copy/paste social engineering. Instead of dropping malware, the attacker tricks a user into:
- logging in through a legitimate flow that generates an OAuth authorization code,
- copying a localhost URL that contains that code,
- pasting it into the attackerâs page.
The attacker ends up with OAuth material that can translate into account takeover, often without the "classic" signs of credential theft.
What AI can catch that traditional tools often miss
This is where AI for fraud prevention and anomaly detection earns its keep. You want models that detect impossible or unlikely identity sequences, like:
- OAuth grants from a user who never uses developer tooling, followed by new app consents
- Sudden token use from new geographies / ASNs / device fingerprints
- Session patterns that don't match the user's baseline (time of day, client type, flow sequence)
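The geography/ASN signal in particular is cheap to compute per token use. A sketch with assumed field names; a production model would score many dimensions together instead of this single binary check:

```python
# High-confidence flag: a token used from an ASN *and* a country the user
# has never authenticated from before. Either alone is common (VPNs,
# travel); both together on a fresh token is worth an alert.
def is_novel_token_use(event: dict, profile: dict) -> bool:
    new_asn = event["asn"] not in profile.get("asns", set())
    new_country = event["country"] not in profile.get("countries", set())
    return new_asn and new_country

# Per-user profile built from historical sign-in telemetry (illustrative).
profile = {"asns": {13335}, "countries": {"US"}}
```

Requiring both conditions keeps the false-positive rate tolerable; tuning that trade-off per user population is where the "AI" part actually earns its keep.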
Practical defenses that pair well with AI detection:
- Phishing-resistant MFA for privileged users and high-risk apps
- Tighter consent policies (limit who can grant which scopes)
- Continuous access evaluation to revoke sessions when risk spikes
One sentence that belongs in every security awareness program now: "Never paste a URL or 'code' from a login flow into a webpage that asked you to."
LastPass fine: a real-world price tag on weak security controls
The U.K. ICO fine against LastPass's British subsidiary ties back to the 2022 breach path: developer laptop compromise → access to development environment → later compromise of a DevOps engineer's machine → keylogger → master password theft → cloud storage breach.
You don't need the exact initial infection vector to learn the lesson:
- Endpoints used for development and admin work are crown jewels.
- Weak monitoring and delayed containment are multipliers.
Where AI-driven security operations actually changes outcomes
This is the part vendors love to overpromise, so here's the realistic framing.
AI helps most when itâs applied to:
- Early lateral movement detection (new tooling, unusual remote access, privilege escalation attempts)
- Credential theft indicators (keylogger-like behaviors, suspicious browser credential access, token dumping)
- Automated containment (isolate host, revoke tokens, rotate secrets, block suspicious app consents)
If your incident timeline still depends on someone noticing "something feels off" in a sea of alerts, you're paying the same kind of long-tail cost that fines and breach response create.
A practical "next week" playbook (AI + automation)
If you want a concrete plan that matches the threats in this recap, run these steps in order.
1) Build a single priority queue for patching and exposure
You need one list, not three spreadsheets.
Prioritize using:
- Active exploitation (KEV, vendor statements, observed scanning)
- Asset criticality (identity systems, devops tooling, internet-facing apps)
- Reachability (is the vulnerable component actually exposed or used?)
AI can assist by correlating inventory, exposure paths, and exploit signals into a ranked backlog.
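The ranking logic itself doesn't need to be complicated; what matters is that exploit activity and reachability outrank raw CVSS. A sketch with illustrative weights (the numbers are assumptions to tune, not a standard):

```python
# Rank a vulnerability backlog by exploit activity, asset criticality,
# and reachability rather than CVSS alone. Weights are tunable assumptions.
def priority_score(vuln: dict) -> int:
    score = 0
    if vuln.get("kev"):
        score += 40   # in CISA's KEV catalog
    if vuln.get("active_exploitation"):
        score += 30   # vendor statements, observed scanning
    score += {"critical": 20, "high": 10}.get(vuln.get("criticality"), 0)
    if vuln.get("reachable"):
        score += 10   # component actually exposed or in use
    return score

backlog = [
    {"cve": "CVE-2025-6218", "kev": True, "active_exploitation": True,
     "criticality": "high", "reachable": True},
    {"cve": "CVE-2025-0001", "kev": False, "active_exploitation": False,
     "criticality": "critical", "reachable": False},
]
ranked = sorted(backlog, key=priority_score, reverse=True)
```

Note how the actively exploited KEV entry outranks the nominally "critical" but unreachable one: that inversion is the whole point of moving past CVSS-only queues.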
2) Baseline normal identity flows, then alert on weird sequences
OAuth and AitM phishing are identity-first attacks.
Start with:
- typical login methods (browser vs CLI),
- normal consent/app patterns,
- common device and network traits.
Then alert on:
- new OAuth consents with high-privilege scopes,
- token replay patterns,
- anomalous SSO flow chains.
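The first of those alert conditions is a good starting rule because it combines a baseline with a sequence. A sketch; the scope names follow Microsoft Graph conventions for illustration, and the event shape is an assumption:

```python
# Alert when a user with no prior consent history suddenly grants an
# OAuth consent carrying a high-privilege scope.
HIGH_PRIV_SCOPES = {"Mail.ReadWrite", "Files.ReadWrite.All", "offline_access"}

def should_alert(event: dict, consent_counts: dict) -> bool:
    """First-ever consent + high-privilege scope = page someone."""
    high_priv = bool(HIGH_PRIV_SCOPES & set(event["scopes"]))
    first_time = consent_counts.get(event["user"], 0) == 0
    return high_priv and first_time

# Historical consent counts per user, built from audit logs (illustrative).
counts = {"dev@example.com": 12}
```

A developer who grants consents routinely doesn't trip this rule; a finance user granting `Mail.ReadWrite` for the first time does, which is exactly the ConsentFix-shaped sequence.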
3) Treat "file write from web tier" as a high-severity event
SOAPwn-style abuse and many RCE chains converge here.
High-signal detections:
- web worker processes writing executables/scripts,
- web directories receiving new files outside deployment pipelines,
- suspicious PowerShell spawned by service accounts.
4) Automate the first 30 minutes of response
Speed matters more than elegance.
Your automation should be able to:
- isolate a suspicious endpoint,
- revoke sessions and refresh tokens,
- rotate exposed secrets,
- create a ticket with enriched context (process tree, identity timeline, affected assets).
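Those four steps are a fixed sequence, which makes them a natural automation target. A skeleton of the runbook: every function here is a stub standing in for a real EDR/IdP/secrets-manager API call (all names are hypothetical):

```python
# Containment runbook skeleton: run the first-30-minutes steps in order
# and record what was done on the ticket. Each stub would call out to a
# real platform API in production.
def isolate_host(host):      return f"isolated {host}"
def revoke_sessions(user):   return f"revoked sessions for {user}"
def rotate_secrets(scope):   return f"rotated secrets in {scope}"
def open_ticket(context):    return {"status": "open", **context}

def contain(incident: dict) -> dict:
    """Execute containment, then open an enriched ticket."""
    actions = [
        isolate_host(incident["host"]),
        revoke_sessions(incident["user"]),
        rotate_secrets(incident["secret_scope"]),
    ]
    return open_ticket({"summary": incident["summary"], "actions": actions})

ticket = contain({"host": "WS-042", "user": "j.doe", "secret_scope": "prod-api",
                  "summary": "suspected token theft"})
```

The value is not in the code but in the ordering: isolate before revoke before rotate, so the attacker loses the endpoint, then the sessions, then the fallback material, with the ticket capturing the full trail for the analyst who picks it up.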
If you can't do those four things quickly, you're letting attackers keep the initiative.
People also ask: "Can AI really detect zero-days?"
Yes, if you mean detecting the behavior around exploitation, not the specific vulnerability.
AI is effective at:
- anomaly detection across endpoint + network + identity signals,
- identifying rare process chains and privilege changes,
- spotting account takeover patterns from session telemetry.
AI is not effective as a standalone control for:
- fixing insecure defaults,
- replacing patching,
- compensating for missing logs and weak identity policies.
The reality? AI makes good teams faster. It doesn't make uninstrumented environments safe.
What to do before everyone goes offline for the holidays
December is when coverage thins out, approvals slow down, and attackers press harder. If you only do three things before the year ends, do these:
- Patch anything actively exploited (especially KEV-listed items and browser/OS updates) and verify completion.
- Harden OAuth and SSO flows: phishing-resistant MFA for admins, restrict consent scopes, and monitor new grants.
- Turn on automated containment for high-confidence identity takeover and web-tier file-write detections.
If you're investing in AI in cybersecurity for 2026, make your evaluation criteria simple: Does this reduce my exposure window, and does it reduce my time-to-contain? If it doesn't move those numbers, it's not helping when the next week looks like this one.
What part of your security stack still relies on "someone noticing" before anything happens: patching, identity, endpoints, or cloud?