AI Agent Phishing: A New Risk for Utilities

AI in Cybersecurity••By 3L3C

AI agent phishing turns email into a machine-targeted attack. Learn how utilities can secure AI assistants with pre-delivery detection and least-privilege controls.

AI securityUtilities cybersecurityEmail phishingPrompt injectionAgentic AISOC operations
Share:

AI Agent Phishing: A New Risk for Utilities

Most utilities are racing to deploy AI assistants in the places that matter: operations inboxes, outage coordination, vendor management, field dispatch, and customer communications. That’s exactly why attackers are shifting their attention from fooling humans to fooling the AI agents that work for humans.

Email phishing isn’t going away. It’s evolving. The new twist is prompt injection delivered through email—instructions that a person can’t see (or wouldn’t interpret as instructions), but an AI assistant can read and act on. For energy and utilities—where a single automated action can change access rights, reroute payments, or expose grid data—this is a bigger deal than most teams realize.

This post is part of our AI in Cybersecurity series, focused on a practical question: if you’re using AI agents in energy operations, how do you keep them from becoming an attacker’s easiest path into your business?

What is AI agent phishing (and why it works)

AI agent phishing is phishing designed to manipulate an AI assistant or agent into taking an unsafe action. Instead of persuading a person to click a link, the attacker persuades the machine to comply—often instantly, and often at scale.

Traditional email phishing relies on human behaviors: urgency, authority, distraction. AI agent phishing relies on something else:

  • AI systems follow instructions literally unless constrained.
  • AI agents are being granted tool access (mailbox, calendars, ticketing systems, document stores, CRMs, finance workflows).
  • Email formats contain “hidden” surfaces (HTML vs. plain text, metadata, formatting tricks) that humans don’t notice but models can parse.

Here’s the uncomfortable point: a person might hesitate before wiring funds or sharing a sensitive attachment. A poorly governed agent might not.

The hidden-instructions problem in email

Email isn’t just what you see in the client. Messages often include:

  • An HTML-rendered view
  • A plain-text alternative
  • Headers and MIME parts
  • Embedded styling and off-screen content

Attackers can insert invisible or de-emphasized text (for example, same-color text on a white background, tiny fonts, off-screen positioned elements, or content only present in one MIME part). A user reads a normal email; an AI assistant scanning the underlying content may ingest extra instructions.

When those instructions are crafted as a prompt injection (“ignore previous instructions,” “send the last invoice,” “summarize and include all attachments,” “export this conversation to…”) the agent can be steered into data exfiltration, unsafe automation, or policy bypass.

Why energy and utilities are especially exposed

Utilities are perfect targets for AI agent phishing because email is operational infrastructure. It’s where approvals happen, vendors coordinate, outages get escalated, and exceptions get negotiated.

A few patterns I keep seeing in utility AI deployments that raise the stakes:

1) The “operations inbox copilot” is becoming normal

Utilities are adding copilots to shared mailboxes like:

  • outage coordination
  • interconnection requests
  • vegetation management
  • storm logistics
  • procurement and vendor onboarding
  • customer escalations

These inboxes contain sensitive data (names, phone numbers, site access details, sometimes network diagrams or incident notes). They’re also high-volume—so teams want automation. Attackers want that too.

2) AI agents increasingly have “hands,” not just “eyes”

The highest-risk shift is moving from:

  • read-only assistants (summarize, classify, suggest)

to:

  • tool-using agents (create tickets, send emails, approve workflows, update records, trigger scripts)

In utilities, a tool-using agent might:

  • open or update an OT/IT incident ticket
  • request log exports
  • share a “helpful” document link
  • route an invoice for payment
  • grant a contractor access to a portal

If an attacker can influence those actions through email, the blast radius is real.

3) The seasonal factor: winter reliability pressure

It’s mid-December 2025. Many utilities are in a reliability mindset—winter events, holiday staffing, end-of-year vendor activity, and heavier reliance on automation to keep service levels up.

That’s exactly when a prompt-injection email that triggers an “autopilot” response can do the most damage: fewer eyes on the workflow, more exceptions granted, and more urgency-driven decisions.

How modern defenses are changing: from “indicators” to “intent”

Traditional secure email gateways are good at known bad indicators:

  • suspicious domains
  • credential-harvesting links
  • malware attachments
  • spoofed senders

AI agent phishing often contains none of those. It can be “clean” from a conventional standpoint, because the payload is text-based manipulation.

The defense trend highlighted by recent industry announcements is a shift to intent-based detection:

  • Identify hidden or mismatched content between HTML and plain text
  • Detect prompt injection patterns and manipulative instructions
  • Classify emails by likely agent impact (exfiltration attempt, policy override attempt, automation hijack)

One practical insight from the vendor side: doing this in-line (while mail is in transit) requires speed. Some providers are training smaller, specialized detection models for low-latency scanning rather than running heavyweight general models on every message.

A scale point worth knowing because it affects efficacy: one major email security provider reports scanning 3.5 billion emails per day, plus tens of billions of URLs and billions of attachments daily. That volume is why you’re seeing more “distilled” detection models optimized for throughput.

Why “pre-delivery” matters more with AI agents

If your AI agent can read and act on messages the moment they arrive, post-delivery detection is too late.

For utilities, the difference between:

  • quarantining a malicious message before it hits the shared mailbox
  • flagging it after the agent already summarized it, forwarded it, or created a ticket

…is the difference between a near-miss and an incident.

Practical controls utilities should implement now

You don’t need to stop using AI agents in email. You do need to treat them like what they are: a new privileged identity class.

1) Put AI agents on least privilege (and prove it)

Answer first: If your agent can send emails, fetch files, and update systems, it must have strict permissions and auditable boundaries.

Concrete steps:

  • Give agents read-only access by default
  • Separate “summarize/classify” from “execute” capabilities
  • Require step-up approval for actions like:
    • external forwarding
    • attachment access
    • payment changes
    • access provisioning
    • exporting or downloading documents

If you can’t explain what your agent can do in one sentence, it has too much access.

2) Add “agent-aware” email security requirements to procurement

When you evaluate secure email gateways or cloud email security layers, ask directly:

  • Do you detect prompt injection in email bodies?
  • Do you compare HTML vs. plain text for mismatches?
  • Can you detect invisible text and formatting tricks?
  • Do you provide pre-delivery enforcement with low latency?
  • Can you score emails by risk-to-agent rather than risk-to-human?

This is different from classic phishing detection. Treat it as a separate requirement.

3) Build an “allowlist of actions,” not a “blocklist of prompts”

Blocklists will fail because attackers will rephrase.

A better approach is to constrain agent behavior:

  • Only allow the agent to call approved tools
  • Only allow those tools to execute approved operations
  • Enforce data boundaries (for example, the agent can summarize an email but cannot attach files unless a human approves)

In other words: don’t try to predict every malicious instruction. Control what instructions can cause.

4) Instrument your agents like production systems

Utilities already monitor SCADA, AMI, and network equipment. Agents deserve similar rigor.

Minimum viable monitoring:

  • logging of tool calls (who/what/when/inputs/outputs)
  • anomaly detection on agent actions (volume spikes, new destinations, unusual downloads)
  • retention aligned to incident response needs
  • periodic review of “top actions” by agent

A useful rule: if an agent can trigger something that would wake up your on-call engineer, you should be able to reconstruct exactly what it did.

5) Train humans on “AI-adjacent” phishing

Security awareness training shouldn’t stop at “don’t click links.” Add scenarios like:

  • “This email looks normal, but it contains hidden instructions targeting the copilot.”
  • “Don’t paste raw email content into an agent and ask it to ‘do whatever it says.’”
  • “Verify unusual requests even if they came from an internal AI summary.”

The goal isn’t fear. It’s good operational hygiene.

Common questions from utility teams (quick answers)

“We don’t let AI send emails—are we safe?”

You’re safer, not safe. Even read-only agents can leak data through summaries, external sharing, ticket creation, or by copying sensitive content into downstream systems.

“Does this affect OT environments too?”

Indirectly, yes. Many OT incidents start in IT workflows—email, identity, vendor access, ticketing, documentation. If the agent can influence those pathways, OT risk rises.

“Is this just a Microsoft Copilot or Google Gemini issue?”

No. Any agent that ingests email content and has tool access can be targeted. The specific exploitation details vary, but the category is broader: prompt injection via untrusted content.

Where this is heading (and what to do next)

AI in energy and utilities is delivering real value—faster outage triage, better asset insights, more responsive customer operations. But the security model has to keep up. If an AI agent has inbox access, it is part of your attack surface. Treat it accordingly.

The most effective path I’ve seen is straightforward: combine pre-delivery detection for agent-focused attacks with tight governance on what agents are allowed to do. Detection catches the weird stuff. Governance limits the blast radius when something slips through.

If you’re rolling out copilots or agentic workflows in 2026 planning cycles, a good next step is to run a short assessment: map which mailboxes are agent-assisted, what tools those agents can call, and which actions could cause financial, operational, or regulatory harm.

What would change in your risk posture if an attacker stopped trying to fool your staff—and started trying to instruct your automation instead?