AI agents are reshaping the web into machine-to-machine interactions. Here’s how utilities can prepare for the agentic Web with secure, scalable AI systems.

AI Agents and the Agentic Web: What Utilities Must Do
The internet is quietly preparing for a new “main character”: software.
Researchers are arguing that autonomous AI agents—not humans clicking links—could soon become the primary users of the web. Dawn Song (UC Berkeley), an AI safety and security researcher, describes a future where your agent negotiates directly with a retailer’s agent, reads thousands of pages in seconds, and completes purchases and bookings without you micromanaging every step.
If you work in energy and utilities, this should feel familiar. Grid operations are already becoming more autonomous: forecasting demand, dispatching flexible loads, optimizing DERs, and scheduling maintenance with minimal human intervention. The big shift is that the same “agentic” pattern is heading for the public internet—meaning utilities will be operating in a world where machine-to-machine transactions, identity, and security become the default.
The agentic Web is a redesign, not a new UI
Answer first: The agentic Web changes the internet from “websites for people” to services for agents, where intent, negotiation, and execution happen automatically.
Today’s web is built around human constraints: we scroll, we compare a handful of options, we fill out forms, and we’re slow. Agents don’t have those limits. They can read thousands of documents in parallel, keep state across tasks, and iterate through options until they hit the best outcome for a user’s goal.
Dawn Song’s example is simple but telling: buying clothes. On today’s web, you type a search query, look at a grid of results, and click through a few pages. In an agentic model, your agent communicates directly with the seller’s agent: “Here are my preferences, budget, sizing history, and delivery constraints—what are the best matches?” That interaction looks less like browsing and more like API-level negotiation.
Utilities should recognize the shape of this change. Grid modernization is moving from dashboards to automated decision loops:
- Forecast → optimize → dispatch → verify → learn
- Detect anomaly → classify → create work order → route crew → confirm fix
The agentic Web simply extends this idea into the broader digital economy.
Why this matters for energy & utilities right now
December 2025 is a moment when utilities are balancing three pressures at once: rising peak volatility, accelerated renewables and storage interconnection, and expanding cybersecurity obligations. An agentic Web adds a fourth pressure: your customer interactions, supplier interactions, and even regulatory reporting may increasingly be mediated by agents.
If you wait until customers’ AI agents start “shopping” your tariffs and service plans automatically, you’ll be reacting under pressure. Planning now is cheaper.
The stack behind AI agents: orchestration, identity, and payments
Answer first: Agents don’t work at scale without three capabilities: multi-agent orchestration, verifiable identity/permissions, and safe transaction mechanisms.
Song points to emerging open protocols for agent tool use and agent-to-agent communication (examples mentioned include Anthropic’s MCP and Google’s A2A). The deeper point isn’t which protocol wins—it’s that the internet will need standard ways for agents to talk, prove who they are, and exchange value.
For utilities, those same building blocks already exist internally, just with different names:
- Orchestration → workflow engines, SCADA/EMS integration, data pipelines
- Identity → IAM, privileged access management, device identity
- Payments/value exchange → billing systems, settlement, wholesale market participation
The difference is where the boundary sits. In an agentic world, the boundary moves outward. Your systems may need to securely interact with:
- A customer’s energy management agent (home, EV, thermostat)
- A corporate procurement agent negotiating electricity supply
- A DER aggregator’s dispatch agent
- A market operator’s automated compliance and bidding interfaces
A practical utility analogy: “grid agents” meet “web agents”
Utilities are already experimenting—explicitly or implicitly—with agent-like automation:
- Demand forecasting agents that continuously retrain on weather, calendar effects, and local events
- Predictive maintenance agents that prioritize inspection based on failure probability and consequence
- Voltage optimization agents that tune setpoints across feeders
Now imagine procurement: a utility’s sourcing agent requests quotes for transformers, lead times, and logistics constraints. A supplier’s agent responds with negotiated terms. Another agent checks compliance requirements, and a finance agent confirms budget and payment conditions.
That’s not science fiction. It’s the same pattern as grid optimization—applied to the rest of the business.
Efficiency is real—but the security risks are bigger than most teams expect
Answer first: Autonomous agents expand the attack surface because they hold context, credentials, and decision authority, often all at once.
Song is blunt about it: this is “uncharted territory.” Agents can have high privileges (buying things, accessing sensitive data, triggering actions), and they can be attacked in ways that cause them to leak information or take actions against user intent.
For utilities, the alarming part isn’t that agents can be tricked. It’s that utilities operate in environments where:
- Safety and reliability matter more than convenience
- Systems are long-lived and interconnected
- Privileged access is common (OT, IT, cloud, vendor portals)
What “agent risk” looks like in energy operations
Here are four concrete failure modes utilities should plan for—because they map directly to agentic systems:
-
Prompt injection and instruction hijacking
- An agent reads a document (email, ticket, PDF, web page) that contains malicious instructions.
- Result: the agent changes behavior, exposes data, or executes the wrong workflow.
-
Credential and token exfiltration
- Agents often use tool connectors, API keys, session cookies, or OAuth tokens.
- Result: a compromised agent becomes a “skeleton key” to multiple systems.
-
Unauthorized actions via over-broad permissions
- Agents are useful only when they can do things. Teams tend to grant wide access “for now.”
- Result: an agent makes an irreversible action (purchase, configuration change, dispatch) without sufficient controls.
-
Multi-agent amplification
- In agent-to-agent workflows, one compromised agent can influence many others.
- Result: errors and exploits spread faster than human review can catch.
A sentence I keep coming back to: Agents turn security from protecting endpoints to protecting intentions. If an attacker can change the agent’s “understanding” of what you want, they can win without breaching a firewall.
Security-by-design for AI agents (what “good” looks like)
Utilities don’t need to wait for perfect standards. They can adopt patterns now that reduce risk materially:
- Least-privilege tool access: grant agents narrow, task-specific permissions; expire credentials quickly.
- Action gating: require human confirmation (or second-agent confirmation) for high-impact actions like dispatch, switching, customer credit decisions, or large purchases.
- Provenance and sandboxing: treat all external content as untrusted; isolate the agent’s “reading” environment from its “acting” environment.
- Auditability by default: log not just outputs, but inputs, tool calls, permissions used, and the reason path.
- Continuous red teaming: Song mentions multi-agent red teaming; utilities should treat this like vulnerability management—ongoing, measurable, and tied to release gates.
Will we get one web or two? Expect a blended world
Answer first: The future is a hybrid web where humans and agents both interact—often together in the same workflow.
Song expects a blend rather than a full replacement. Humans won’t disappear. But the dominant interaction pattern changes: agents do the searching, filtering, and negotiating; humans handle final judgment for high-stakes choices.
Energy and utilities should assume the same operating model:
- Operators won’t be replaced by “grid agents.”
- Operators will supervise more automated decisions across more assets.
- The limiting factor will be trust, governance, and security—not model accuracy.
This is where many AI programs go wrong. They focus heavily on model performance and underinvest in operational controls: access, logging, approvals, and incident response for AI-driven actions.
How utilities can prepare for the agentic Web (a 90-day plan)
Answer first: Preparation starts with inventory and governance, then moves to integration patterns and security testing.
If your organization wants to benefit from autonomous AI without creating new risk, this sequence works.
Step 1: Inventory “agent-like” workflows already in your business
Create a shortlist of workflows where AI or automation is already making recommendations or triggering actions:
- Forecasting (load, DER output, outage risk)
- Maintenance planning and work management
- Contact center triage and customer self-service
- Procurement and vendor management
- Regulatory reporting and document processing
Then label each workflow by impact level (low/medium/high) based on safety, cost, and customer impact.
Step 2: Define what an agent is allowed to do—before you build it
A practical policy statement that prevents chaos:
- What data can the agent read?
- What tools can it call?
- What actions can it take without approval?
- What triggers a human review?
- What’s the rollback plan if it acts incorrectly?
In utilities, “who can open a breaker” is a sacred question. Treat “who can trigger a dispatch or change a setpoint” the same way—whether the actor is human or software.
Step 3: Implement agent controls that mirror OT safety practices
The best inspiration for agent governance is not a consumer AI playbook. It’s OT engineering discipline:
- Interlocks
- Two-person rules for critical operations
- Change management
- Alarm management
- Post-event analysis
Translate that discipline into digital workflows:
- Dual approval for high-impact tool calls
- Rate limits and anomaly detection on agent actions
- Immutable audit logs for investigations
Step 4: Run multi-agent red teaming against your own systems
Don’t limit testing to “does it answer correctly?” Test:
- Can it be tricked by a malicious document?
- Can it be induced to reveal secrets?
- Can it be socially engineered through ticket text?
- Can it be pushed into unsafe actions through ambiguous instructions?
Treat the results like any other security program: prioritize, remediate, retest.
What this means for the AI in Energy & Utilities roadmap
The agentic Web is a reminder that AI isn’t just an analytics layer. It’s becoming an execution layer.
Utilities that get ahead will do two things in parallel:
- Build more autonomous systems for grid optimization, demand forecasting, and predictive maintenance.
- Build the trust framework—identity, permissions, audit, and testing—so those systems don’t become the next major incident.
The internet may soon be optimized for agents negotiating with agents. The question for energy leaders is sharper: when customers, suppliers, and markets show up with AI agents, will your infrastructure know how to interact safely—and profitably—without slowing everything down?