
Security Copilot in M365 E5: What Changes for SOCs
Microsoft’s decision to bundle Security Copilot into Microsoft 365 E5 shifts AI in cybersecurity from “nice-to-pilot” to “hard-to-ignore.” The biggest change isn’t that more teams will have access to AI—it’s that many security teams will now have AI sitting directly inside the workflows they already live in: identity, endpoint, email, data protection, and compliance.
Most companies get this wrong: they treat an AI assistant like a new tool to buy and “turn on.” But once Security Copilot is included with E5, the real work becomes operational—governance, permissions, runbooks, and deciding what you’ll automate versus what you’ll only summarize. If you’re running a SOC (or managing one), this bundle is a forcing function.
This post breaks down what Microsoft actually announced, what Security Compute Units (SCUs) mean in practice, where the new agent-based security model helps (and where it can hurt), and how to roll this out in a way that reduces risk and produces measurable outcomes.
What Microsoft’s bundle really signals
Bundling Security Copilot with M365 E5 is a pricing move, but it’s also a product strategy: AI becomes part of the security platform, not an add-on. When AI is purchased separately, it stays experimental. When it’s bundled, it becomes part of normal operations—and executives start asking why it isn’t being used.
Here’s what’s materially different about this announcement:
- Default availability drives adoption. A separate SKU creates friction: budget approvals, uncertain ROI, and “we’ll revisit next quarter.” Bundling removes most of that.
- The platform is going agent-first. Microsoft previewed 12 new security agents designed to operate across Defender, Entra, Intune, and Purview. That’s a major bet on “agentic AI for security operations,” not just chat-based assistance.
- Third-party agent expansion is the point. Microsoft highlighted 30+ partner agents (for example: identity, cloud threat detection, endpoint triage). That’s a hint that the long-term plan is an ecosystem—your AI security assistant isn’t one model, it’s a control plane.
I like this direction. Security teams don’t need another dashboard. They need fewer handoffs and faster decisions, with good audit trails.
How SCUs work (and why they matter to planning)
The bundle doesn’t mean unlimited use. Microsoft is allocating usage via Security Compute Units (SCUs).
The published allotment:
- 400 SCUs per month for each 1,000 paid M365 E5 user licenses
- Maximum of 10,000 SCUs per month
Examples Microsoft provided:
- 400 user licenses → 160 SCUs/month
- 4,000 user licenses → 1,600 SCUs/month
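If you want to sanity-check your own allotment, the math is easy to model. Here's a minimal sketch in Python, assuming linear pro-rating below 1,000 licenses (an assumption, but one that matches both of Microsoft's published examples):

```python
def monthly_scu_allotment(e5_licenses: int) -> int:
    """Estimate bundled SCUs/month from paid M365 E5 license count.

    Based on Microsoft's published ratio: 400 SCUs per 1,000 licenses,
    capped at 10,000 SCUs/month. Pro-rating below 1,000 licenses is an
    assumption consistent with the 400-license example.
    """
    SCUS_PER_1000_LICENSES = 400
    MONTHLY_CAP = 10_000
    allotment = e5_licenses * SCUS_PER_1000_LICENSES // 1000
    return min(allotment, MONTHLY_CAP)

assert monthly_scu_allotment(400) == 160        # Microsoft's first example
assert monthly_scu_allotment(4_000) == 1_600    # Microsoft's second example
assert monthly_scu_allotment(50_000) == 10_000  # cap binds beyond 25,000 licenses
```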
The practical impact: SCUs force prioritization
SCUs create a reality that security leaders should embrace: you can’t run every task through AI all day. You have to choose high-value use cases.
Use SCUs like a budgeting tool:
- Reserve a “production” pool for time-sensitive workflows (phishing triage, identity investigations, high-confidence incident enrichment).
- Create an “experimentation” pool for new prompts, agent tuning, and playbook design.
- Track burn rate per workflow so you can answer the CFO question: “What did we get for this?”
If you don’t measure SCU consumption by scenario, you’ll end up rationing randomly—usually right when you need the system most.
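Here's a minimal sketch of that budgeting model in Python. The 80/20 split and pool names are my assumptions, not anything Microsoft prescribes; wire `record` to however you export actual usage numbers:

```python
from collections import defaultdict

class ScuBudget:
    """Track SCU burn per workflow against production/experimentation pools."""

    def __init__(self, monthly_scus: int, production_share: float = 0.8):
        self.pools = {
            "production": monthly_scus * production_share,
            "experimentation": monthly_scus * (1 - production_share),
        }
        self.burn = defaultdict(float)  # workflow -> SCUs consumed

    def record(self, pool: str, workflow: str, scus: float) -> None:
        self.burn[workflow] += scus
        self.pools[pool] -= scus
        if self.pools[pool] < 0:
            # Surface overruns loudly instead of silently borrowing
            print(f"WARNING: {pool} pool exhausted ({-self.pools[pool]:.0f} SCUs over)")

budget = ScuBudget(monthly_scus=1_600)  # e.g., the 4,000-license allotment
budget.record("production", "phishing-triage", 12.5)
budget.record("experimentation", "new-prompt-tuning", 3.0)
print(dict(budget.burn))  # the "what did we get for this?" ledger
```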
A blunt take: SCUs won’t fix understaffing
If you're expecting the bundled SCUs to replace analysts, you're going to be disappointed. The near-term value of AI in cybersecurity is throughput and consistency:
- faster summarization of messy incidents
- more complete evidence collection
- better cross-tool correlation
- fewer “tribal knowledge” bottlenecks
It’s not magic. It’s operational efficiency.
The 12 new agents: where AI actually helps the SOC
Agent-based security is useful when the work is repetitive, multi-step, and spread across systems. Microsoft’s new agents target exactly that.
At a high level, Microsoft previewed:
- Defender agents: threat detection, phishing triage, threat intelligence support, and attack disruption
- Entra agents: conditional access, identity protection, application lifecycle
- Intune agents: policy configuration, vulnerability and endpoint compliance
- Purview agents: data protection and compliance auditing
Three SOC workflows that should improve immediately
1) Phishing triage and mailbox hunting
Phishing response is high-volume and full of context switching. AI can help by:
- summarizing headers, links, and sender infrastructure
- clustering similar reports into a single incident thread
- drafting containment steps (block sender, purge mail, search org-wide)
The win isn’t just speed—it’s reducing inconsistency between analysts.
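Even before AI enters the picture, the clustering step is mostly bookkeeping. A hypothetical sketch, assuming a simple grouping key of sender domain plus normalized subject (real triage would also compare URLs, attachment hashes, and sender infrastructure):

```python
from collections import defaultdict
from email.utils import parseaddr

def cluster_phishing_reports(reports: list[dict]) -> dict[tuple, list[dict]]:
    """Group user-reported phish into candidate incidents.

    Key on sender domain + normalized subject so one campaign becomes
    one incident thread instead of fifty tickets.
    """
    clusters = defaultdict(list)
    for report in reports:
        _, sender = parseaddr(report["from"])
        domain = sender.rsplit("@", 1)[-1].lower()
        subject = report["subject"].lower().strip().removeprefix("re:").strip()
        clusters[(domain, subject)].append(report)
    return clusters

reports = [
    {"from": "Payroll <hr@evil.example>", "subject": "Updated payroll form"},
    {"from": "hr@evil.example", "subject": "RE: Updated payroll form"},
]
print(len(cluster_phishing_reports(reports)))  # 1 incident, not 2
```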
2) Identity investigations (Entra-focused)
Identity-driven attacks are often “death by a thousand signals”: impossible travel, new device registration, MFA fatigue patterns, token abuse. An agent that can:
- gather sign-in logs
- map access patterns
- compare behavior to baseline
…can save hours per case. But only if permissions and logging are configured correctly.
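To make that concrete, here's a hypothetical sketch of the baseline-comparison step. The event fields are illustrative, not the actual Entra sign-in log schema:

```python
def flag_anomalous_signins(events: list[dict], baseline: dict) -> list[dict]:
    """Compare sign-in events to a per-user baseline of known countries
    and devices; flag anything outside it for analyst review.

    `baseline` maps user -> {"countries": set, "devices": set}.
    """
    flagged = []
    for event in events:
        known = baseline.get(event["user"], {"countries": set(), "devices": set()})
        reasons = []
        if event["country"] not in known["countries"]:
            reasons.append("new country")
        if event["device_id"] not in known["devices"]:
            reasons.append("new device")
        if reasons:
            flagged.append({**event, "reasons": reasons})
    return flagged
```

The agent's job is the tedious part above this function: pulling the logs and building the baseline. The decision stays with the analyst.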
3) Endpoint triage and compliance drift (Intune + Defender)
Many incidents get worse because endpoints aren’t where you think they are: outdated policies, missing patches, stale compliance claims. An agent that can detect drift and propose a remediation path is valuable—especially in large fleets.
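A hypothetical sketch of the drift-detection step, assuming you can export an expected baseline and per-endpoint state (field names are illustrative):

```python
def find_policy_drift(expected: dict, endpoints: list[dict]) -> list[dict]:
    """Report endpoints whose actual state diverges from the expected baseline.

    `expected` maps setting -> required value, e.g. {"disk_encryption": True}.
    """
    drifted = []
    for endpoint in endpoints:
        gaps = {
            setting: (required, endpoint.get(setting))
            for setting, required in expected.items()
            if endpoint.get(setting) != required
        }
        if gaps:
            drifted.append({"device": endpoint["device"], "gaps": gaps})
    return drifted

baseline = {"disk_encryption": True, "firewall": True}
fleet = [{"device": "LT-0042", "disk_encryption": True, "firewall": False}]
print(find_policy_drift(baseline, fleet))
# [{'device': 'LT-0042', 'gaps': {'firewall': (True, False)}}]
```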
Where teams will get burned: “autonomous” actions without guardrails
Attack disruption and automated containment sound great until they interrupt business-critical processes.
Set a rule early:
- AI can recommend by default.
- AI can act automatically only in narrow, pre-approved scenarios (for example: known malicious domains, confirmed mass phishing, or high-confidence impossible-travel detections combined with risky sign-ins).
Your first month should be mostly “recommend and log,” not “execute and hope.”
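That rule is simple enough to encode as a gate in front of every automated action. A minimal sketch; the scenario names and confidence floor are assumptions you'd tune per environment:

```python
# Narrow, pre-approved scenarios where auto-action is allowed (keep this tiny)
AUTO_ACT_ALLOWLIST = {"known_malicious_domain", "confirmed_mass_phishing"}
CONFIDENCE_FLOOR = 0.95

def decide(scenario: str, confidence: float) -> str:
    """Default to recommend-and-log; act only inside the allowlist."""
    if scenario in AUTO_ACT_ALLOWLIST and confidence >= CONFIDENCE_FLOOR:
        return "auto_act"
    return "recommend_and_log"

print(decide("known_malicious_domain", 0.99))  # auto_act
print(decide("impossible_travel", 0.99))       # recommend_and_log: not pre-approved
```

The point of keeping the allowlist tiny is that every addition becomes a deliberate, reviewed decision rather than a silent default.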
Agent sprawl is the next attack surface
Microsoft also introduced Agent 365, described as a control plane/registry to manage and govern enterprise agents. This matters because the industry is heading toward massive agent growth; Microsoft cited an IDC forecast of 1.3 billion agents deployed by enterprises within three years.
That number is less important than what it implies: agents will have access—to data, identities, and actions. Access is the currency of breaches.
What “AI agent governance” needs to include
If you’re rolling out Security Copilot broadly, treat agents like privileged applications:
- Identity and access controls: least privilege, scoped permissions, separate identities for agents
- Data boundaries: which workspaces, mailboxes, repositories, and labels the agent can see
- Action approvals: who can authorize containment, user disablement, or policy changes
- Auditability: prompt logs, actions taken, evidence used, and who approved what
- Third-party risk: partner agents should be reviewed like SaaS integrations, not “plugins”
A clean one-liner for leadership: “Agents are automation with credentials. Manage them like you manage admins.”
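Concretely, that means every agent gets a governance record before it gets credentials. A hypothetical sketch of a minimal registry entry; this is your own bookkeeping, not the Agent 365 schema, and the scope names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Minimal governance record: treat the agent like a privileged app."""
    name: str
    owner: str                       # a human accountable for the agent
    identity: str                    # dedicated identity, never a shared account
    scopes: list[str]                # least-privilege permissions, enumerated
    data_boundaries: list[str]       # workspaces/mailboxes/labels it may read
    auto_actions: list[str] = field(default_factory=list)  # empty by default
    approvers: list[str] = field(default_factory=list)     # who signs off on actions

phish_agent = AgentRecord(
    name="phishing-triage",
    owner="soc-lead@contoso.example",
    identity="svc-agent-phish-triage",
    scopes=["Mail.Read"],                           # illustrative scope name
    data_boundaries=["mailboxes:reported-phish-only"],
    approvers=["soc-lead@contoso.example"],
)
```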
How to roll this out without creating chaos (a 30–60 day plan)
The fastest way to fail with AI-powered security tools is broad rollout without a usage model. Here’s what works.
Days 1–10: Pick two use cases and define “done”
Start small, but measurable.
Good starter use cases:
- phishing triage for one business unit
- identity investigations for high-risk sign-ins
Define success metrics that aren’t fluffy:
- MTTD/MTTR reduction for the selected incident types
- time-to-triage (minutes) and time-to-containment
- analyst touches per incident (how many handoffs)
- false-positive closure time
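To compare these before and after rollout, capture them the same way every time. A minimal sketch, assuming an illustrative incident schema with timestamps normalized to minutes since alert creation:

```python
from statistics import median

def triage_metrics(incidents: list[dict]) -> dict:
    """Baseline the non-fluffy metrics for one incident type.

    Each incident: {"triaged": minutes, "contained": minutes, "touches": int}.
    The schema is illustrative; pull the real fields from your ticketing system.
    """
    return {
        "median_time_to_triage_min": median(i["triaged"] for i in incidents),
        "median_time_to_contain_min": median(i["contained"] for i in incidents),
        "mean_analyst_touches": sum(i["touches"] for i in incidents) / len(incidents),
    }

baseline = triage_metrics([
    {"triaged": 45, "contained": 180, "touches": 4},
    {"triaged": 30, "contained": 240, "touches": 3},
])
print(baseline)  # snapshot this before enabling Copilot, then compare
```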
Days 11–30: Build runbooks and guardrails
Write down what the agent can do, and what it can’t.
- Define “recommend-only” versus “auto-action” scenarios
- Create approval steps for disruptive actions
- Standardize prompt templates (so results are repeatable)
- Decide where outputs live (tickets, incident notes, case management)
This is boring work. It’s also the difference between value and confusion.
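"Write it down" can literally mean a config the SOC reviews in version control. A hypothetical sketch of per-workflow runbook entries (all keys and paths are illustrative):

```python
RUNBOOKS = {
    "phishing-triage": {
        "mode": "recommend_only",         # flip to "auto_action" only after review
        "allowed_actions": ["block_sender", "purge_mail"],
        "approval_required": ["org_wide_search"],        # disruptive: needs a human
        "prompt_template": "prompts/phish-triage-v3.md", # pinned for repeatability
        "output_destination": "ticketing/incident-notes",
    },
    "identity-investigation": {
        "mode": "recommend_only",
        "allowed_actions": [],
        "approval_required": ["disable_user", "revoke_sessions"],
        "prompt_template": "prompts/identity-inv-v1.md",
        "output_destination": "case-management",
    },
}
```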
Days 31–60: Expand carefully and operationalize SCU budgeting
Once you’ve proven two workflows, add one more and scale.
- allocate SCUs per workflow
- monitor SCU burn weekly
- identify “high cost, low value” prompts and remove them
If SCUs run low, you want to cut experimentation first, not incident response.
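A minimal sketch of that weekly review, building on the ledger idea above; the thresholds are assumptions:

```python
def weekly_scu_review(remaining_production: float,
                      remaining_experimentation: float,
                      burn_by_workflow: dict[str, float]) -> list[str]:
    """Weekly SCU check: protect incident response, cut experimentation first."""
    actions = []
    if remaining_experimentation <= 0:
        actions.append("pause experimentation until next month")
    if remaining_production < 100:  # illustrative floor; set from your critical workflows
        actions.append("reserve remaining production SCUs for incident response only")
    # Surface the top burners so "high cost, low value" prompts get reviewed
    top = sorted(burn_by_workflow.items(), key=lambda kv: kv[1], reverse=True)[:3]
    actions.extend(f"review ROI of '{wf}' ({scus:.0f} SCUs)" for wf, scus in top)
    return actions
```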
People also ask: practical questions SOC leaders should answer
Will bundling Security Copilot replace my SIEM or SOAR?
No. Expect Copilot to augment investigation and response by summarizing, correlating, and recommending actions across Microsoft’s stack (and partners). Your SIEM/SOAR still matters for log scale, correlation rules, playbook execution, and long-term detection engineering.
Is Security Copilot only useful if we’re “all-in” on Microsoft?
You’ll get the most benefit if you use Defender/Entra/Intune/Purview heavily. That said, partner agents suggest Microsoft knows hybrid shops need cross-tool workflows. The real question is whether you can govern those integrations well.
What’s the biggest hidden risk?
Over-trusting confident language. AI outputs can sound authoritative even when they’re incomplete. Treat Copilot as a strong analyst assistant—then require evidence: logs, timestamps, indicators, and the chain of reasoning.
What this means for AI in cybersecurity going into 2026
Security teams are heading into 2026 with the same constraint they’ve had for years: too many alerts, too few hours, and attackers who automate everything. Bundling Security Copilot into M365 E5 is Microsoft saying, “AI assistance is now table stakes for enterprise security operations.”
If you’re an E5 customer, the smartest move is to treat this as an operational program, not a feature launch: pick a couple high-volume workflows, set SCU budgets, build governance around agents, and prove measurable improvements in response time and consistency.
If you want help turning AI security assistants into a controlled, auditable part of your SOC—use cases, guardrails, and metrics—my recommendation is to start with a short assessment and a 60-day pilot plan. The teams that do this well won’t just resolve incidents faster; they’ll finally stop wasting senior analyst time on work that machines are better at.
Where do you want AI to take work off your plate first: phishing, identity, endpoint triage, or compliance investigations?