AI Threat Detection for 0-Days, OAuth Scams, RCE

AI in Cybersecurity · By 3L3C

AI threat detection helps stop zero-days, OAuth scams, and RCE faster. Learn a practical playbook to reduce exposure and automate response.

Tags: AI threat detection · Zero-day response · OAuth security · Vulnerability management · Security automation · Incident response



Most security teams treat “patching” and “phishing training” as separate problems. This week’s incident mix shows why that mental model fails.

We’ve got actively exploited Apple and Chrome zero-days, a WinRAR path traversal flaw pulled into the Known Exploited Vulnerabilities (KEV) catalog, a .NET behavior (“SOAPwn”) that can turn file writes into RCE, and a new OAuth copy‑paste trick (ConsentFix) that dodges some of the endpoint signals defenders rely on. Layer on top the LastPass fine tied to its 2022 breach, and the pattern is blunt: attackers are winning by moving faster than human-scale processes.

This post is part of our AI in Cybersecurity series, and I’m going to take a stance: AI-driven threat detection and security automation aren’t “nice to have” anymore—without them, your exposure window is whatever your slowest manual workflow is. The goal here isn’t to rehash headlines. It’s to translate them into an operating plan you can run next week.

What this week’s exploits have in common

The shared thread is speed and ambiguity: defenders have incomplete information while attackers already have working playbooks.

  • Zero-days (Apple/WebKit and Chrome/ANGLE): minimal public details, high-impact, often “highly targeted,” which makes rule-based detection brittle.
  • “Normal” tools used in abnormal ways: .NET HTTP client proxies accepting file:// paths; OAuth auth codes exposed in a localhost URL; calendar subscriptions used as delivery rails.
  • Opportunistic exploitation after disclosure: the surge in React-related exploitation is a reminder that public proof-of-concept code shortens time-to-weaponization.

Here’s the practical takeaway: your detection program has to generalize. It can’t depend on a CVE name showing up in an IDS signature update.

AI helps when it’s applied to the right layer:

  1. Behavioral detection (what the system is doing, not what the exploit is called)
  2. Identity and session anomaly detection (OAuth and SSO abuse is an identity problem first)
  3. Automation to compress mean-time-to-patch and mean-time-to-contain

Zero-days: why “highly targeted” still hits enterprises

Two WebKit zero-days—CVE-2025-14174 (memory corruption) and CVE-2025-43529 (use-after-free)—were patched across Apple platforms, and one issue also affected Chrome through the shared ANGLE component.

“Highly targeted” gets misunderstood. Teams hear it and think “not us.” The real risk is that targeted chains become commoditized fast:

  • A commercial spyware chain today becomes an e-crime loader tomorrow.
  • Even before that happens, executives and admins are high-value targets, and they’re inside your network.

Where AI-driven threat detection fits for zero-days

AI doesn’t magically “know” a new CVE. What it can do well is flag rare or suspicious behavior that often accompanies exploitation.

Examples of high-signal detections that matter even when the exploit is unknown:

  • Browser-to-child-process anomalies: Safari/Chrome spawning unusual child processes, script hosts, or shell interpreters.
  • New persistence right after web browsing: Launch agents, scheduled tasks, unusual login items.
  • Credential access attempts after a browser crash/restart: token theft patterns and keychain access attempts.
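The first of those detections can be sketched as a simple behavioral rule. This is a minimal illustration, not a product rule: the event field names (`parent_image`, `child_image`) and the process lists are assumptions standing in for whatever your EDR pipeline emits.

```python
# Behavioral rule sketch: a browser spawning a shell or script interpreter.
# Field names and process lists are illustrative, not a specific EDR schema.

BROWSERS = {"safari", "chrome", "chrome.exe", "msedge.exe", "firefox", "firefox.exe"}
SUSPICIOUS_CHILDREN = {"bash", "zsh", "sh", "osascript", "python",
                       "powershell.exe", "cmd.exe", "wscript.exe", "cscript.exe"}

def _basename(path: str) -> str:
    # Normalize Windows and POSIX separators, then take the final component.
    return path.replace("\\", "/").rsplit("/", 1)[-1].lower()

def is_suspicious_spawn(event: dict) -> bool:
    """True when a browser process spawns a shell/script interpreter."""
    return (_basename(event.get("parent_image", "")) in BROWSERS
            and _basename(event.get("child_image", "")) in SUSPICIOUS_CHILDREN)
```

The point of a rule like this is that it fires on exploitation behavior regardless of which CVE produced it; an ML layer would replace the static lists with learned per-fleet rarity.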

If you’re evaluating AI for cybersecurity, ask a blunt question: Can it correlate browser telemetry, endpoint events, and identity signals into one incident without my analyst stitching it together? If the answer is “no,” you’re buying a dashboard, not a capability.

RCE isn’t always “a vuln”—sometimes it’s a design surprise

The .NET issue labeled SOAPwn is a good example of why defenders need to hunt for classes of failure, not just CVEs.

The core behavior: some .NET HTTP client proxy implementations can accept non-HTTP URLs (like filesystem paths). Under the wrong conditions, a SOAP request that was supposed to be sent over HTTP can be written to a local path instead.

That can cascade into:

  • Arbitrary file write → webshell drop
  • Script drop (PowerShell) → code execution
  • NTLM challenge capture/relay in certain flows
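On the prevention side, the underlying class of failure is "a proxy setting that accepts non-HTTP URLs." A minimal sketch of the guard, shown in Python for illustration (the .NET fix is the same idea: allowlist the scheme before honoring a configured proxy):

```python
from urllib.parse import urlparse

ALLOWED_PROXY_SCHEMES = {"http", "https"}

def validate_proxy_url(proxy_url: str) -> str:
    """Reject proxy values whose scheme could redirect traffic to disk."""
    scheme = urlparse(proxy_url).scheme.lower()
    if scheme not in ALLOWED_PROXY_SCHEMES:
        # e.g. file:///inetpub/wwwroot/shell.aspx would fail here
        raise ValueError(f"refusing non-HTTP proxy scheme: {scheme or '(none)'}")
    return proxy_url
```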

How AI helps here (and what it shouldn’t pretend to do)

AI won’t fix risky defaults in application frameworks. It can help you spot the exploitation path earlier by detecting:

  • Unexpected file writes by web app worker processes (especially to web roots or scriptable directories)
  • New .aspx, .jsp, .php, or script artifacts created by service accounts
  • Outbound authentication attempts (NTLM/Kerberos anomalies) that don’t match baseline

If you run a lot of .NET, this week is a reminder to add an “AI-assisted” control in AppSec and SecOps: baseline what “normal writes” look like for your web tiers, then alert hard on deviations.

WinRAR exploitation: the patch is necessary, not sufficient

CVE-2025-6218 (WinRAR path traversal, CVSS 7.8) being actively exploited by multiple groups—and landing in CISA’s KEV—is exactly what “commodity exploitation” looks like.

Path traversal in archive utilities is a repeat offender because:

  • users trust archives (“it’s just a file”),
  • extraction happens on endpoints outside hardened server controls,
  • payload placement can be stealthy (dropping into Startup folders, user profile paths, or app directories).
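The defensive pattern against this bug class is well known: resolve every archive member path and refuse anything that escapes the extraction directory. A minimal sketch (any archive library would call this before writing each member):

```python
import os

def safe_member_path(dest_dir: str, member_name: str) -> str:
    """Resolve an archive member path; refuse traversal outside dest_dir."""
    dest = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest, member_name))
    # commonpath equals dest only when target is inside the extraction root.
    if os.path.commonpath([dest, target]) != dest:
        raise ValueError(f"path traversal attempt: {member_name!r}")
    return target
```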

AI’s role: stop treating patching as a calendar event

Many teams still patch endpoints in waves that assume a stable threat landscape. That assumption is wrong. When something hits KEV and exploitation is active, patching becomes an incident response task.

AI-enabled security automation can compress that response by:

  • Auto-triaging vulnerability exposure based on real software inventory and usage
  • Prioritizing by exploit activity + asset criticality (not just CVSS)
  • Triggering compensating controls when patching can’t happen immediately (restrict archive extraction from email, block common delivery file types, tighten execution policies)

A simple rule I’ve found works: KEV + active exploitation means your SLA is measured in days, not weeks. If your tooling or approvals can’t hit that, you need automation.

OAuth scams (ConsentFix): the attack moved into the browser

The new ConsentFix technique is a nasty evolution of copy/paste social engineering. Instead of dropping malware, the attacker tricks a user into:

  1. logging in through a legitimate flow that generates an OAuth authorization code,
  2. copying a localhost URL that contains that code,
  3. pasting it into the attacker’s page.

The attacker ends up with OAuth material that can translate into account takeover—often without the “classic” signs of credential theft.

What AI can catch that traditional tools often miss

This is where AI for fraud prevention and anomaly detection earns its keep. You want models that detect impossible or unlikely identity sequences, like:

  • OAuth grants from a user who never uses developer tooling, followed by new app consents
  • Sudden token use from new geographies / ASNs / device fingerprints
  • Session patterns that don’t match the user’s baseline (time of day, client type, flow sequence)
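A real model would learn these baselines from telemetry, but the scoring logic can be sketched additively. Everything here is an assumption for illustration: the field names, the weights, and the "high-privilege scope" set are placeholders for your own IdP's data.

```python
# Illustrative session-risk sketch: score an OAuth/session event against a
# per-user baseline. Weights, field names, and scope names are assumptions.

HIGH_PRIV_SCOPES = {"offline_access", "full_access"}  # hypothetical set

def session_risk_score(event: dict, baseline: dict) -> int:
    """Add risk for each attribute that deviates from the user's baseline."""
    score = 0
    if event["asn"] not in baseline["asns"]:
        score += 2                      # token use from a new network
    if event["device_id"] not in baseline["devices"]:
        score += 2                      # unfamiliar device fingerprint
    if event["client_type"] not in baseline["client_types"]:
        score += 1                      # e.g. CLI for a browser-only user
    if event.get("new_consent_scope") in HIGH_PRIV_SCOPES:
        score += 3                      # high-privilege consent just granted
    return score
```

A ConsentFix-style takeover tends to light up several of these at once: new network, new device, and a fresh consent in the same short window.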

Practical defenses that pair well with AI detection:

  • Phishing-resistant MFA for privileged users and high-risk apps
  • Tighter consent policies (limit who can grant which scopes)
  • Continuous access evaluation to revoke sessions when risk spikes

One sentence that belongs in every security awareness program now: “Never paste a URL or ‘code’ from a login flow into a webpage that asked you to.”

LastPass fine: a real-world price tag on weak security controls

The U.K. ICO fine against LastPass’s British subsidiary ties back to the 2022 breach path: developer laptop compromise → access to development environment → later compromise of a DevOps engineer’s machine → keylogger → master password theft → cloud storage breach.

You don’t need the exact initial infection vector to learn the lesson:

  • Endpoints used for development and admin work are crown jewels.
  • Weak monitoring and delayed containment are multipliers.

Where AI-driven security operations actually changes outcomes

This is the part vendors love to overpromise, so here’s the realistic framing.

AI helps most when it’s applied to:

  • Early lateral movement detection (new tooling, unusual remote access, privilege escalation attempts)
  • Credential theft indicators (keylogger-like behaviors, suspicious browser credential access, token dumping)
  • Automated containment (isolate host, revoke tokens, rotate secrets, block suspicious app consents)

If your incident timeline still depends on someone noticing “something feels off” in a sea of alerts, you’re paying the same kind of long-tail cost that fines and breach response create.

A practical “next week” playbook (AI + automation)

If you want a concrete plan that matches the threats in this recap, run these steps in order.

1) Build a single priority queue for patching and exposure

You need one list, not three spreadsheets.

Prioritize using:

  • Active exploitation (KEV, vendor statements, observed scanning)
  • Asset criticality (identity systems, devops tooling, internet-facing apps)
  • Reachability (is the vulnerable component actually exposed or used?)

AI can assist by correlating inventory, exposure paths, and exploit signals into a ranked backlog.
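The ranking itself doesn't need to be exotic. A sketch of one sort key that encodes the three criteria above, with weights that are purely illustrative (tune them to your own risk appetite):

```python
# Single ranked backlog sketch: exploit activity + asset criticality +
# reachability in one sort key. Weights are illustrative, not prescriptive.

def priority(vuln: dict) -> int:
    score = 0
    if vuln.get("in_kev"):
        score += 40                     # on CISA KEV: SLA in days, not weeks
    if vuln.get("active_exploitation"):
        score += 30                     # vendor statements, observed scanning
    score += {"crown_jewel": 20, "internet_facing": 15, "internal": 5}.get(
        vuln.get("asset_tier", "internal"), 5)
    if vuln.get("reachable", True):
        score += 10                     # component actually exposed or used
    return score

def ranked_backlog(vulns: list[dict]) -> list[dict]:
    return sorted(vulns, key=priority, reverse=True)
```

Note what's absent: raw CVSS. It can be a tiebreaker, but exploit activity and criticality drive the order.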

2) Baseline normal identity flows—then alert on weird sequences

OAuth and AitM phishing are identity-first attacks.

Start with:

  • typical login methods (browser vs CLI),
  • normal consent/app patterns,
  • common device and network traits.

Then alert on:

  • new OAuth consents with high-privilege scopes,
  • token replay patterns,
  • anomalous SSO flow chains.

3) Treat “file write from web tier” as a high-severity event

SOAPwn-style abuse and many RCE chains converge here.

High-signal detections:

  • web worker processes writing executables/scripts,
  • web directories receiving new files outside deployment pipelines,
  • suspicious PowerShell spawned by service accounts.
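The first two detections reduce to one predicate: a web worker writing a script or executable outside a deployment window. A minimal sketch, with field names and process/extension lists as assumptions for your own environment:

```python
# Web-tier file-write sketch: script/executable writes by web worker
# processes outside deployments. Lists and field names are illustrative.

SCRIPT_EXTS = {".aspx", ".asp", ".jsp", ".php", ".ps1", ".sh"}
WEB_WORKERS = {"w3wp.exe", "nginx", "httpd", "dotnet"}

def is_webtier_write_alert(event: dict, deploy_window: bool = False) -> bool:
    """Alert on script/executable writes by web workers outside deployments."""
    proc = event.get("process_name", "").lower()
    path = event.get("file_path", "").lower()
    ext = "." + path.rsplit(".", 1)[-1] if "." in path else ""
    return proc in WEB_WORKERS and ext in SCRIPT_EXTS and not deploy_window
```

The `deploy_window` flag is the important design choice: suppressing during known deployments keeps the alert high-severity the rest of the time instead of training analysts to ignore it.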

4) Automate the first 30 minutes of response

Speed matters more than elegance.

Your automation should be able to:

  1. isolate a suspicious endpoint,
  2. revoke sessions and refresh tokens,
  3. rotate exposed secrets,
  4. create a ticket with enriched context (process tree, identity timeline, affected assets).

If you can’t do those four things quickly, you’re letting attackers keep the initiative.

People also ask: “Can AI really detect zero-days?”

Yes—if you mean detecting the behavior around exploitation, not the specific vulnerability.

AI is effective at:

  • anomaly detection across endpoint + network + identity signals,
  • identifying rare process chains and privilege changes,
  • spotting account takeover patterns from session telemetry.

AI is not effective as a standalone control for:

  • fixing insecure defaults,
  • replacing patching,
  • compensating for missing logs and weak identity policies.

The reality? AI makes good teams faster. It doesn’t make uninstrumented environments safe.

What to do before everyone goes offline for the holidays

December is when coverage thins out, approvals slow down, and attackers press harder. If you only do three things before the year ends, do these:

  1. Patch anything actively exploited (especially KEV-listed items and browser/OS updates) and verify completion.
  2. Harden OAuth and SSO flows: phishing-resistant MFA for admins, restrict consent scopes, and monitor new grants.
  3. Turn on automated containment for high-confidence identity takeover and web-tier file-write detections.

If you’re investing in AI in cybersecurity for 2026, make your evaluation criteria simple: Does this reduce my exposure window, and does it reduce my time-to-contain? If it doesn’t move those numbers, it’s not helping when the next week looks like this one.

What part of your security stack still relies on “someone noticing” before anything happens—patching, identity, endpoints, or cloud?
