AI Defense Against BlueDelta-Style Credential Phishing

AI in Cybersecurity • By 3L3C

BlueDelta’s UKR.NET phishing shows how APTs steal passwords and 2FA codes. Learn how AI detection stops credential theft with behavior-based defense.

credential phishing, APT28, threat intelligence, identity security, AI security analytics, incident response

BlueDelta (aka APT28/Fancy Bear/Forest Blizzard) didn’t need malware to cause damage in its latest wave of activity. Between June 2024 and April 2025, investigators identified a sustained credential-harvesting campaign aimed at UKR.NET users, using fake login pages and carefully layered redirection infrastructure. The haul was simple: usernames, passwords, and 2FA codes.

Most security teams still treat phishing as an “awareness problem” and credential theft as an “identity team problem.” That split is exactly what state-sponsored actors count on. The reality is that modern credential-harvesting campaigns behave like a system: email delivery, web redirection, impersonated portals, proxy tunnels, and abnormal login behavior. AI in cybersecurity matters here because it’s one of the few approaches that can watch the whole system at once—continuously—and react fast enough to interrupt it.

This post breaks down what made this campaign effective, why it’s a blueprint you should expect to see again in 2026, and how to use AI-powered threat detection to spot credential theft before it turns into account takeover and espionage.

What BlueDelta’s UKR.NET campaign tells us (and why it’s scary)

Answer first: BlueDelta’s campaign shows that well-funded attackers can run long-lived phishing operations by abusing free web services and proxy tunneling, staying agile when infrastructure gets taken down.

The operation targeted a widely used Ukrainian webmail and news service (UKR.NET) with fake login portals designed to collect:

  • Usernames and passwords
  • CAPTCHA responses
  • Two-factor authentication codes
  • In at least one iteration, even victim IP addresses (useful for profiling and filtering)

What stands out isn’t a single clever trick—it’s persistence and operational discipline. Investigators identified 42 credential-harvesting chains and observed infrastructure updates over time, including new tiers and adjustments to evade warnings and detections.

If you’re defending an enterprise (or a public-sector environment), the lesson is blunt: credential phishing is no longer a bursty, opportunistic activity. It’s an ongoing access pipeline.

Why PDFs were the perfect lure

Answer first: PDFs let attackers move the “malicious click” out of the email body, reducing the chances that secure email gateways and sandboxes fully detonate the link chain.

Rather than sending a naked phishing link, this campaign relied heavily on PDF lures claiming suspicious account activity and prompting a password reset. That’s effective for three reasons:

  1. User psychology: a “security alert” creates urgency and compliance.
  2. Email control gaps: many organizations inspect URLs in email bodies more aggressively than URLs embedded in attachments.
  3. Sandbox friction: some sandboxes don’t follow embedded PDF links deeply, especially through multi-hop redirects and shorteners.

Security teams often respond by tightening attachment policies. That helps, but it’s not enough by itself—because your users still need PDFs to do their jobs.

The infrastructure pattern: cheap services, high sophistication

Answer first: BlueDelta’s chain used a multi-step architecture combining free hosting, link shorteners, and proxy tunnels to hide the true collection server and bypass common defenses.

What makes campaigns like this hard to stop is their modular design. According to the research, the credential-harvesting setup commonly included multiple tiers such as:

  • Tier 1: lure delivery (phishing email with PDF)
  • Tier 2: redirectors and/or hosted phishing page (often on free web services)
  • Tier 3: reverse proxy / tunneling layer (to obscure where data is really going)
  • Tier 4: dedicated servers that ultimately receive credentials

A key operational shift was the move from compromised routers (used earlier for credential capture and 2FA handling) to proxy tunneling platforms like ngrok and Serveo.

That’s not a random choice. Proxy tunneling services offer:

  • Fast infrastructure rotation
  • A layer of anonymity/indirection
  • Less reliance on owning compromised edge devices
  • Easy exposure of upstream services without traditional hosting footprints

The ngrok “browser warning” bypass is a tell

Answer first: Attackers deliberately modified their phishing code to suppress safety warnings, which is a behavioral signal defenders can model.

One iteration added a specific request header to disable an ngrok browser warning page:

  • ngrok-skip-browser-warning: 1

From a defender’s perspective, this is gold. It’s a stable behavioral marker you can hunt for in web proxy logs, secure browser telemetry, or network sensors. Better yet, it’s the kind of weak signal AI models can correlate with other context (PDF click → redirect chain → suspicious header → new domain) to elevate confidence quickly.
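As an illustration, a minimal hunt for this marker in proxy logs might look like the following. The header name is the campaign's real indicator; the log line format below is an assumed example, not any specific product's schema.

```python
import re

# The header name is the observed indicator; the log format is an assumption.
BYPASS_HEADER = re.compile(r"ngrok-skip-browser-warning", re.IGNORECASE)

def flag_tunnel_bypass(log_lines):
    """Return proxy log lines carrying the ngrok warning-bypass header."""
    return [line for line in log_lines if BYPASS_HEADER.search(line)]

logs = [
    '2025-03-01T10:02:11Z 10.0.4.7 GET https://x7k2.example-tunnel.app/login '
    '"ngrok-skip-browser-warning: 1"',
    '2025-03-01T10:02:12Z 10.0.4.8 GET https://intranet.example.com/home '
    '"Accept: text/html"',
]
hits = flag_tunnel_bypass(logs)
```

Running the same pattern over historical logs is a cheap retro-hunt: the header only exists because someone wrote phishing code to suppress a safety page.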

Where AI fits: detecting credential theft as a system, not an event

Answer first: AI helps most when it correlates low-signal artifacts across email, endpoint, network, and identity—then automates a safe response.

Traditional defenses often fail here because each control only sees a slice:

  • Email security sees a PDF and a “benign” link shortener.
  • DNS security sees a newly registered domain that doesn’t match known bad lists yet.
  • Web gateways see a user visiting a legitimate free-service domain.
  • Identity systems see a login that looks valid—until it’s too late.

AI-powered security analytics can connect those slices. Specifically, the best outcomes come from models that combine:

  • Content signals: “account verification,” “password reset,” “suspicious activity” language in lures
  • Infrastructure signals: new/rare domains, free hosting providers, tunneling endpoints, nonstandard ports
  • Sequence signals: user opens PDF → clicks embedded URL → lands on lookalike login → immediate login attempt to real service
  • Identity signals: impossible travel, new ASN/proxy logins, unusual device fingerprints, repeated 2FA failures

A simple but effective stance: treat credential-harvesting as a funnel. AI’s job is to spot the funnel forming.
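One way to make the funnel idea concrete is a weighted correlation across stages. This is a sketch, not a production scoring model: the stage names, weights, and alert threshold are all illustrative assumptions.

```python
from collections import defaultdict

# Illustrative stage weights -- later funnel stages carry more signal.
FUNNEL_STAGES = {"pdf_link_click": 1, "shortener_redirect": 1,
                 "lookalike_login_page": 2, "new_asn_login": 3}
ALERT_THRESHOLD = 5  # assumed cutoff; tune against your own false-positive rate

def score_users(events):
    """events: iterable of (user, stage) tuples. Returns users over threshold."""
    scores = defaultdict(int)
    seen = defaultdict(set)
    for user, stage in events:
        if stage in FUNNEL_STAGES and stage not in seen[user]:
            seen[user].add(stage)          # count each stage once per user
            scores[user] += FUNNEL_STAGES[stage]
    return {u: s for u, s in scores.items() if s >= ALERT_THRESHOLD}

events = [("alice", "pdf_link_click"), ("alice", "shortener_redirect"),
          ("alice", "lookalike_login_page"), ("alice", "new_asn_login"),
          ("bob", "pdf_link_click")]
alerts = score_users(events)
```

The point is structural: one clicked PDF link is noise, but the full sequence through a lookalike login to a new-ASN sign-in is a funnel forming.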

AI-powered anomaly detection for logins: what to model

Answer first: You’ll catch more account takeovers by modeling “normal login behavior” and flagging deviations than by chasing every new phishing domain.

For state-sponsored phishing, domain lists are always behind. Behavioral detection isn’t. Here’s what I’ve found works well in practice:

  • First-time seen combinations: user + device + location + ASN + client app
  • 2FA friction patterns: repeated prompts, multiple invalid codes, sudden “push fatigue” behavior
  • Proxy/tunnel indicators: access coming from hosting ranges associated with tunneling services or from unusual high ports
  • Time-of-day drift: logins outside a user’s established working window, especially paired with sensitive mailbox access
  • Post-login actions: rapid mailbox search/export rules, forwarding setup, OAuth consent attempts
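The "first-time seen combinations" idea above can be sketched as a small profiler. The feature tuple (user, device, country, ASN) is an assumption for illustration; real deployments persist profiles, add decay, and use richer features.

```python
class LoginProfiler:
    """Flags login feature combinations a user has never produced before."""

    def __init__(self):
        self.known = set()

    def is_novel(self, user, device, country, asn):
        combo = (user, device, country, asn)
        novel = combo not in self.known
        self.known.add(combo)  # learn the combination either way
        return novel
```

A novel combination alone shouldn't block a login, but it's exactly the kind of weak identity signal worth feeding into the funnel correlation described earlier.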

The campaign’s focus on capturing 2FA codes is a reminder: MFA is necessary, but not sufficient. If your MFA method is phishable, attackers will phish it.

Automated response: the fastest safe actions

Answer first: The best automated playbooks are reversible and user-friendly: isolate the session, reset risk, and verify identity.

If your AI detection flags a likely credential-harvesting event, automation should aim to stop the blast radius without causing a week-long outage:

  1. Step-up auth immediately (prefer phishing-resistant methods where possible)
  2. Kill active sessions for the impacted identity
  3. Temporary conditional access lock (geo/ASN/device) until verification
  4. Force password reset and revoke refresh tokens
  5. Mailbox rule audit and auto-remediation (forwarding rules are a favorite)
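Those five steps can be expressed as an ordered, fail-safe dispatcher. The action functions below are stubs (assumptions); in practice each would call your IdP or SOAR platform's API.

```python
def contain_identity(user, actions):
    """Run containment actions in order; stop at the first failure so the
    identity is never left in a half-remediated, unknown state."""
    executed = []
    for name, action in actions:
        try:
            action(user)
        except Exception:
            break
        executed.append(name)
    return executed

# Stub actions in playbook order -- replace with real IdP / SOAR calls.
steps = [
    ("step_up_auth",        lambda user: None),
    ("kill_sessions",       lambda user: None),
    ("conditional_lock",    lambda user: None),
    ("reset_password",      lambda user: None),
    ("audit_mailbox_rules", lambda user: None),
]
```

Ordering matters: stepping up auth and killing sessions first closes the live attacker session before the slower, user-visible actions run.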

This is where 24/7 monitoring matters. BlueDelta ran this campaign for months. Human-only monitoring doesn’t scale against that tempo.

Practical defenses you can deploy this quarter

Answer first: Combine phishing-resistant authentication, deny-lists for unnecessary free services, and AI-driven correlation across email/web/identity.

You don’t need a moonshot program to reduce your exposure. A solid quarter’s work can make campaigns like this far less profitable.

1) Tighten identity controls where it counts

  • Prioritize phishing-resistant MFA for admins, finance, IT support, and executives.
  • Block legacy authentication paths.
  • Enforce conditional access rules based on device posture and risk.

If you only do one thing: separate “mailbox access” from “high-risk mailbox actions.” Reading an email might be allowed with moderate risk; creating forwarding rules shouldn’t be.
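That separation can be sketched as a risk-tiered policy check. The action names, thresholds, and MFA requirement below are illustrative assumptions about how such a policy might be expressed.

```python
# Assumed set of actions that should require stricter conditions than reading mail.
HIGH_RISK_ACTIONS = {"create_forwarding_rule", "export_mailbox", "grant_oauth_consent"}

def is_allowed(action, risk_score, phishing_resistant_mfa):
    """Allow routine access at moderate risk; gate high-risk mailbox actions
    behind low risk plus phishing-resistant MFA. Thresholds are illustrative."""
    if action in HIGH_RISK_ACTIONS:
        return risk_score < 0.2 and phishing_resistant_mfa
    return risk_score < 0.6
```

The effect: a stolen session at elevated risk can still read mail (low damage, reversible) but cannot set up the forwarding rules attackers use for persistence.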

2) Treat tunneling and free hosting as “high-risk by default”

The campaign abused free services and tunneling platforms to hide infrastructure and rotate quickly. If your business doesn’t need these services, deny-listing is a practical move.

  • Block or heavily monitor traffic to free tunneling providers
  • Alert on access to free hosting platforms used for quick phishing page deployment
  • Watch for connections to uncommon ports associated with relays

The goal isn’t to ban the internet. It’s to remove attacker-friendly paths you don’t actually require.
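A deny/monitor rule for tunneling destinations can be as simple as the sketch below. ngrok and Serveo appear in the campaign; the exact domain patterns are assumptions and will need ongoing upkeep as providers change their namespaces.

```python
import re

# Assumed domain patterns for tunneling providers seen in the campaign.
TUNNEL_PATTERNS = [
    re.compile(r"\.ngrok(-free)?\.(app|io|dev)$"),
    re.compile(r"\.serveo\.net$"),
]

def classify_destination(hostname):
    """Tag hostnames that resolve to known tunneling namespaces."""
    host = hostname.lower().rstrip(".")
    if any(p.search(host) for p in TUNNEL_PATTERNS):
        return "high_risk_tunnel"
    return "default"
```

Whether "high_risk_tunnel" means block or just alert-and-log depends on whether your business legitimately uses these services.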

3) Upgrade PDF and link inspection (without breaking work)

  • Detonate PDFs in a controlled environment and extract embedded URLs for deeper inspection.
  • Add policy for “security-themed PDFs” that request authentication actions.
  • Use browser isolation for risky click paths (especially from attachments).
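To make the first bullet concrete: a crude first pass at pulling embedded URLs out of a PDF can scan the raw bytes for `/URI` action entries. This is a deliberately minimal sketch; real PDFs often compress object streams, so full detonation or a proper PDF parser is still required.

```python
import re

# /URI ( ... ) is how uncompressed PDF link actions store their target URL.
URI_RE = re.compile(rb"/URI\s*\(([^)]+)\)")

def extract_pdf_urls(pdf_bytes):
    """Return URL strings found in uncompressed /URI action entries."""
    return [m.decode("latin-1", "replace") for m in URI_RE.findall(pdf_bytes)]

# Assumed sample fragment of a PDF link-action dictionary.
sample = b"<< /Type /Action /S /URI /URI (https://short.example/x7K2) >>"
urls = extract_pdf_urls(sample)
```

Extracted URLs then flow into the same reputation and redirect-chain analysis you already apply to links in email bodies, closing the inspection gap the lure exploits.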

4) Make AI useful: feed it the right signals

AI detection improves when it has consistent telemetry. If you want better outcomes:

  • Centralize identity logs, web proxy logs, endpoint events, and email telemetry
  • Normalize fields like ASN, device ID, user agent, and redirect chains
  • Add threat intelligence enrichment for newly registered domains and known abused services

AI isn’t magic. It’s pattern recognition. Give it patterns.
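The normalization bullet above is mostly plumbing, and a sketch makes that clear. The source field names here are assumptions about two telemetry feeds; the point is mapping everything onto one canonical schema before any model sees it.

```python
# Assumed per-source field mappings onto a canonical schema.
FIELD_MAP = {
    "proxy":    {"src_ip": "client_ip", "dest": "url", "ua": "user_agent"},
    "identity": {"ip": "client_ip", "app": "client_app", "agent": "user_agent"},
}

def normalize(source, record):
    """Rename a record's fields to canonical names; unknown fields pass through."""
    mapping = FIELD_MAP.get(source, {})
    return {mapping.get(key, key): value for key, value in record.items()}
```

Once proxy and identity events share a `client_ip` and `user_agent`, cross-source correlation (the funnel described earlier) becomes a join instead of a research project.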

People also ask: “If we already have MFA, why does this still work?”

Answer first: It works because attackers phish the MFA step in real time, capturing codes or pushing victims into approving prompts.

The campaign specifically aimed to collect two-factor authentication codes, meaning the phishing flow was built to intercept more than just the password. That’s why phishing-resistant methods (hardware keys, passkeys, device-bound authenticators) matter. They make the attacker’s “fake login portal” model collapse.

What to do next (and where AI in cybersecurity is headed)

BlueDelta’s UKR.NET operation is a clean illustration of modern credential theft: low-cost infrastructure, high operational maturity, and constant iteration when defenders take things down. Expect more of this in 2026—not just from Russian state-aligned groups, but from any actor who learns the pattern.

If your security stack still treats phishing, web security, and identity as separate lanes, you’re forcing analysts to do manual correlation at the exact moment attackers are automating. AI in cybersecurity is the practical counterweight: it correlates signals fast and triggers containment before stolen credentials turn into persistent access.

If you want to pressure-test your environment against BlueDelta-style credential harvesting, start with a simple question: Could your team detect a user clicking a PDF link to a lookalike login page—and stop the session before the attacker uses the captured 2FA code?