AI-based anomaly detection can stop AWS cryptomining caused by stolen IAM credentials—before attackers scale ECS/EC2 and rack up major cloud costs.

AI Stops AWS Cryptomining From Stolen IAM Credentials
Cloud cryptomining isn’t “just a cost spike.” It’s a high-signal indicator that someone is operating inside your AWS account with real permissions—often admin-level IAM credentials—and they’re moving fast.
A newly documented AWS cryptomining campaign showed exactly how fast: attackers had miners running within ~10 minutes of initial access after quickly enumerating permissions, probing quotas, and spinning up compute across ECS and EC2. The part that should worry security teams most isn’t the mining software. It’s the operational maturity: automation, quota-awareness, and persistence techniques designed to slow incident response.
This post is part of our AI in Cybersecurity series, and I’m going to take a clear stance: if you’re still relying on static IAM reviews and after-the-fact alerts to catch credential abuse, you’re already behind. AI-driven cloud threat detection is how you keep up with this speed.
What this AWS cryptomining campaign tells us (and why it matters)
This campaign is a textbook example of a modern cloud attack chain: valid credentials + automation + fast monetization.
The adversary reportedly started with compromised IAM user credentials that had admin-like privileges, then immediately moved into discovery:
- Enumerate environment and permissions
- Check EC2 quotas and whether they can run instances
- Deploy miners across ECS/Fargate and EC2
- Add persistence and anti-remediation friction
Two details stand out because they’re the “tell” you can build detections around.
The “DryRun” trick: testing permissions without leaving obvious damage
Attackers used the EC2 RunInstances API with the DryRun flag. That’s clever for two reasons:
- It confirms permissions without actually launching compute (so no immediate bill shock).
- It can reduce the forensic footprint, because nothing “material” happened yet—just permission probing.
From a defender’s perspective, DryRun is a gift. Legitimate teams do use it, but not usually in bursts, not from unusual IPs, and not paired with rapid follow-on role creation.
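Here is a minimal sketch of what that detection can look like, using CloudTrail's lookup API. The 30-minute window, the burst threshold, and the Client.DryRunOperation error-code filter are illustrative assumptions you would tune against your own logs.

```python
"""
Minimal sketch of a DryRun-burst detector, assuming CloudTrail management
events are enabled. Window, threshold, and the Client.DryRunOperation
error-code filter are assumptions to verify in your environment.
"""
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

def dryrun_bursts(window_minutes=30, threshold=5):
    start = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    counts = Counter()

    paginator = cloudtrail.get_paginator("lookup_events")
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
        StartTime=start,
    ):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            # A successful permission probe surfaces as a DryRun "error"
            if detail.get("errorCode") == "Client.DryRunOperation":
                counts[detail.get("userIdentity", {}).get("arn", "unknown")] += 1

    # Flag principals whose probe count exceeds the burst threshold
    return {arn: n for arn, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for arn, n in dryrun_bursts().items():
        print(f"ALERT: {arn} issued {n} DryRun RunInstances probes")
```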
Persistence via termination protection: slowing down your response
The campaign also abused ModifyInstanceAttribute by setting disableApiTermination=True, making it harder to terminate instances via console/CLI/API until you explicitly re-enable termination.
That’s not “advanced malware.” It’s worse: it’s an attacker who understands how your responders work and is optimizing for dwell time.
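If you want to know whether this trick is already in play, a quick read-only sweep is enough. The sketch below only inspects the attribute; it changes nothing.

```python
"""
Minimal sketch: list instances that currently have termination protection
enabled, so responders know where the extra friction is before they act.
"""
import boto3

ec2 = boto3.client("ec2")

def protected_instances():
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                attr = ec2.describe_instance_attribute(
                    InstanceId=instance["InstanceId"],
                    Attribute="disableApiTermination",
                )
                if attr["DisableApiTermination"]["Value"]:
                    yield instance["InstanceId"]

if __name__ == "__main__":
    for instance_id in protected_instances():
        print(f"Termination protection enabled: {instance_id}")
```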
Cryptomining isn’t the end goal in many environments—it’s the proof that identity controls and cloud monitoring failed.
How attackers scale cryptomining in AWS once they have IAM credentials
Once an attacker has valid IAM credentials, the cloud becomes a menu. This campaign shows a practical, repeatable playbook.
ECS cluster sprawl + malicious task definitions
The actor created dozens of ECS clusters (reports included cases exceeding 50 clusters in one intrusion), then registered a task definition pointing to a malicious Docker image that executed a script to run mining software.
This matters because ECS gives attackers:
- Fast deployment (no AMI building)
- Horizontal scaling (many tasks, many nodes)
- A “normal-looking” operational pattern if you only watch EC2
If your monitoring is EC2-centric, Fargate-based cryptojacking can hide in plain sight until your costs explode.
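A crude but useful tripwire: compare the current ECS cluster count against an expected baseline. In the sketch below, the baseline value is an assumption you would pull from your own inventory or learn from history.

```python
"""
Minimal sketch: alert when the number of ECS clusters in a region jumps
well past an expected baseline. EXPECTED_CLUSTERS is an assumption.
"""
import boto3

EXPECTED_CLUSTERS = 5  # assumption: typical cluster count for this account/region

ecs = boto3.client("ecs")

def cluster_count():
    count = 0
    paginator = ecs.get_paginator("list_clusters")
    for page in paginator.paginate():
        count += len(page["clusterArns"])
    return count

if __name__ == "__main__":
    current = cluster_count()
    if current > EXPECTED_CLUSTERS * 2:
        print(f"ALERT: {current} ECS clusters found (baseline {EXPECTED_CLUSTERS})")
```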
EC2 autoscaling groups tuned for quota exhaustion
The campaign also created autoscaling groups configured to scale aggressively (reported ranges like 20 to 999 instances) to consume quotas and maximize mining output.
They targeted a mix of:
- High-performance GPU / ML instance families
- Compute-optimized, memory-optimized, and general-purpose instances
This detail is important for detection: GPU and ML instance launches are often rare in non-ML workloads, and sudden bursts are a strong anomaly.
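A minimal sketch of that check, assuming you review Auto Scaling groups directly: flag extreme MaxSize values and GPU/ML instance families. The size threshold and family list are illustrative assumptions.

```python
"""
Minimal sketch: flag Auto Scaling groups with extreme MaxSize values or
GPU/ML instance families that are unusual for this account. Threshold and
instance-family list are assumptions to tune.
"""
import boto3

SUSPICIOUS_MAX_SIZE = 100                             # assumption: tune to your fleet
GPU_FAMILIES = ("p3", "p4", "p5", "g4", "g5", "g6")   # assumption: rare in non-ML accounts

autoscaling = boto3.client("autoscaling")

def suspicious_asgs():
    paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for asg in page["AutoScalingGroups"]:
            findings = []
            if asg["MaxSize"] >= SUSPICIOUS_MAX_SIZE:
                findings.append(f"MaxSize={asg['MaxSize']}")
            for instance in asg.get("Instances", []):
                family = instance["InstanceType"].split(".")[0]
                if family in GPU_FAMILIES:
                    findings.append(f"GPU instance type {instance['InstanceType']}")
                    break
            if findings:
                yield asg["AutoScalingGroupName"], findings

if __name__ == "__main__":
    for name, findings in suspicious_asgs():
        print(f"REVIEW: {name}: {', '.join(findings)}")
```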
Secondary abuse: SES permissions for outbound phishing
Beyond mining, the actor created an IAM user and attached Amazon SES full access, which strongly suggests a pivot into phishing or spam operations from your trusted domain and IP space.
That’s the bigger risk: your compromised cloud identity becomes an attack platform against others.
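A simple audit catches this pivot: enumerate IAM users and flag any with SES full access attached. The sketch below only checks attached managed policies; inline and group-inherited permissions would need a fuller audit.

```python
"""
Minimal sketch: flag IAM users with the AmazonSESFullAccess managed policy
attached. Inline policies and group membership are out of scope here.
"""
import boto3

iam = boto3.client("iam")

def users_with_ses_full_access():
    users = iam.get_paginator("list_users")
    for page in users.paginate():
        for user in page["Users"]:
            policies = iam.get_paginator("list_attached_user_policies")
            for policy_page in policies.paginate(UserName=user["UserName"]):
                for policy in policy_page["AttachedPolicies"]:
                    if policy["PolicyName"] == "AmazonSESFullAccess":
                        yield user["UserName"]

if __name__ == "__main__":
    for username in users_with_ses_full_access():
        print(f"REVIEW: {username} has AmazonSESFullAccess attached")
```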
Where AI-driven threat detection fits (and where rules alone fall short)
AI in cybersecurity gets overhyped when it’s framed as “replace analysts.” The real value here is simpler: AI is good at spotting behavior that doesn’t match your baseline—fast enough to matter.
This campaign’s speed (minutes) and breadth (ECS + EC2 + IAM + Lambda + SES) are exactly where static rules degrade:
- Rules are brittle across accounts and teams
- Legitimate ops often look “weird” to generic thresholds
- Attackers chain low-and-slow setup actions before the obvious mining spike
AI-based anomaly detection, when implemented well, catches the sequence, not just the symptom.
What “good” AI detection looks like for IAM credential abuse
You want models (or strong behavioral analytics) that can answer:
- Is this principal behaving like it usually does?
- Is the access path typical? (new IP, new ASN/hosting provider, new geo, new user agent)
- Is the action sequence normal? (permission probing → role creation → compute orchestration)
- Is the resource intent plausible? (spinning up 40 new ECS clusters at 2 a.m. from a fresh IP is not)
A practical approach I’ve found works: treat identity activity as a graph.
- Nodes: principals, roles, services, IPs, regions
- Edges: API calls, role assumptions, policy attachments
- AI flags: unusual new edges, unusual edge frequency, unusual edge order
You don’t need magic. You need high-fidelity telemetry + baselines + fast response hooks.
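Here is a minimal sketch of that graph idea, assuming CloudTrail records as input: build (principal, action, source IP) edges, learn a baseline, and flag edges you have never seen before. Frequency and sequence modeling would sit on top of this in a real system.

```python
"""
Minimal sketch of identity activity as a graph: edges are
(principal, event name, source IP) tuples from CloudTrail records, and
anything not in the learned baseline is flagged.
"""
def edges(cloudtrail_records):
    """Yield (principal_arn, event_name, source_ip) edges from raw records."""
    for record in cloudtrail_records:
        yield (
            record.get("userIdentity", {}).get("arn", "unknown"),
            record["eventName"],
            record.get("sourceIPAddress", "unknown"),
        )

def learn_baseline(historical_records):
    """Baseline = set of edges observed during a known-good period."""
    return set(edges(historical_records))

def novel_edges(baseline, recent_records):
    """Edges never seen before for that principal/action/IP combination."""
    return [edge for edge in edges(recent_records) if edge not in baseline]

if __name__ == "__main__":
    # Illustrative records; in practice these come from CloudTrail delivery to S3.
    history = [{"userIdentity": {"arn": "arn:aws:iam::111122223333:user/dev"},
                "eventName": "DescribeInstances", "sourceIPAddress": "203.0.113.10"}]
    recent = [{"userIdentity": {"arn": "arn:aws:iam::111122223333:user/dev"},
               "eventName": "RunInstances", "sourceIPAddress": "198.51.100.7"}]
    for edge in novel_edges(learn_baseline(history), recent):
        print("Novel edge:", edge)
```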
High-signal detections to prioritize (starting this week)
If you only implement a few detections, focus on the ones that map tightly to this campaign pattern:
- Burst of RunInstances with DryRun=true by a principal that rarely touches EC2
- Creation of service-linked roles and new roles, followed by immediate policy attachment
- Sudden ECS cluster creation spikes (especially dozens in minutes)
- New autoscaling groups with extreme max sizes (hundreds)
- disableApiTermination=true set on newly created instances
- New SES permission grants or new IAM users with SES full access
These are sequence-friendly signals—perfect for AI-assisted correlation.
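For real-time coverage of those signals you don't have to wait for a batch query; an EventBridge rule over CloudTrail API events can match them as they happen. The sketch below wires a single rule for RunInstances and ModifyInstanceAttribute calls; the rule name and topic ARN are placeholders, and you would separate DryRun probes from real launches downstream.

```python
"""
Minimal sketch: an EventBridge rule matching RunInstances and
ModifyInstanceAttribute calls recorded via CloudTrail, forwarded to an SNS
topic. Rule name and topic ARN are placeholders.
"""
import json

import boto3

events = boto3.client("events")

RULE_NAME = "cryptomining-early-signals"                                  # placeholder
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"    # placeholder

pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "source": ["aws.ec2"],
    "detail": {"eventName": ["RunInstances", "ModifyInstanceAttribute"]},
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "alert-topic", "Arn": ALERT_TOPIC_ARN}],
)
```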
Defensive blueprint: identity-first controls that blunt cryptojacking
The best cloud incident is the one that never becomes an incident. Here’s a pragmatic blueprint that reduces your risk even if credentials leak.
Replace long-term access keys with short-lived credentials
Long-lived access keys are still everywhere because they’re convenient.
They’re also a gift to attackers: steal once, persist for months.
Do this instead:
- Prefer role assumption and short-lived tokens
- Reduce access key usage to tightly controlled break-glass cases
- Set aggressive key rotation and monitor for unused keys
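To find the keys most likely to bite you, audit age and last use. In the sketch below, the 90-day threshold is an assumption; use whatever your rotation policy says.

```python
"""
Minimal sketch: flag long-term access keys that are old or have never been
used. The 90-day threshold is an assumption.
"""
from datetime import datetime, timedelta, timezone

import boto3

MAX_KEY_AGE_DAYS = 90  # assumption: align with your rotation policy

iam = boto3.client("iam")

def stale_access_keys():
    cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_KEY_AGE_DAYS)
    users = iam.get_paginator("list_users")
    for page in users.paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                last_date = last_used["AccessKeyLastUsed"].get("LastUsedDate")
                if key["CreateDate"] < cutoff or last_date is None:
                    yield user["UserName"], key["AccessKeyId"]

if __name__ == "__main__":
    for username, key_id in stale_access_keys():
        print(f"ROTATE/REMOVE: {username} key {key_id}")
```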
Apply least privilege like you mean it
“Admin-like” IAM users are common in real life. They’re also the reason these campaigns scale.
Concrete changes that help immediately:
- Remove wildcard permissions (*) from human users
- Separate provisioning permissions from runtime permissions
- Force sensitive actions behind role assumption and approvals
If someone steals a developer’s credentials, they shouldn’t be able to:
- Create roles
- Attach managed policies
- Create service-linked roles
- Launch fleets of instances
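One way to enforce that cut-off is a permissions boundary on human users. The sketch below creates a boundary policy that allows day-to-day work but denies the provisioning primitives from this campaign; the policy name and action list are illustrative, not a complete boundary.

```python
"""
Minimal sketch: create a permissions boundary that blocks the provisioning
actions an attacker needs to scale, even if a developer's credentials leak.
Policy name and action list are illustrative assumptions.
"""
import json

import boto3

iam = boto3.client("iam")

boundary = {
    "Version": "2012-10-17",
    "Statement": [
        # Allow routine work...
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        # ...but explicitly deny the scale-out primitives seen in this campaign.
        {
            "Effect": "Deny",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:AttachUserPolicy",
                "iam:CreateServiceLinkedRole",
                "autoscaling:CreateAutoScalingGroup",
                "ecs:CreateCluster",
            ],
            "Resource": "*",
        },
    ],
}

iam.create_policy(
    PolicyName="developer-permissions-boundary",  # placeholder name
    PolicyDocument=json.dumps(boundary),
)
```

You would still attach the boundary per user (put_user_permissions_boundary) and, ideally, back it with an organization-level SCP so a compromised principal can't simply remove it.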
Make container trust explicit
Because the campaign used a malicious container image, container controls matter even if your IAM is solid.
Minimum bar:
- Only allow task definitions from approved registries
- Scan images and block known-bad signatures
- Alert on new image sources or unusual repository names
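Here is a minimal sketch of that registry allowlist check, run against active ECS task definitions. The approved registry prefixes are placeholders for your real ECR registries.

```python
"""
Minimal sketch: flag active ECS task definitions whose container images
don't come from an approved registry. APPROVED_PREFIXES is a placeholder.
"""
import boto3

APPROVED_PREFIXES = (
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/",  # placeholder ECR registry
)

ecs = boto3.client("ecs")

def untrusted_task_definitions():
    paginator = ecs.get_paginator("list_task_definitions")
    for page in paginator.paginate(status="ACTIVE"):
        for arn in page["taskDefinitionArns"]:
            detail = ecs.describe_task_definition(taskDefinition=arn)
            for container in detail["taskDefinition"]["containerDefinitions"]:
                image = container["image"]
                if not image.startswith(APPROVED_PREFIXES):
                    yield arn, image

if __name__ == "__main__":
    for arn, image in untrusted_task_definitions():
        print(f"REVIEW: {arn} uses unapproved image {image}")
```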
Keep CloudTrail and managed detections on—then automate containment
Telemetry without action becomes a postmortem.
Operationally, you want:
- Centralized logging for management events
- Continuous detection for IAM anomalies and compute spikes
- Automated playbooks that can: quarantine roles, pause autoscaling, and contain new deployments
If termination protection is used as friction, the playbook should explicitly:
- Identify instances with termination protection enabled
- Flip disableApiTermination back to false
- Terminate and verify no re-provisioning occurs
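As a sketch of that last mile, assuming the instance IDs have already been confirmed malicious by a human or a trusted detection:

```python
"""
Minimal sketch of the containment step: disable termination protection,
terminate, and confirm the instances actually go away. Checking for
re-provisioning (e.g., rogue Auto Scaling groups) is a separate step.
"""
import boto3

ec2 = boto3.client("ec2")

def contain(instance_ids):
    for instance_id in instance_ids:
        # Undo the attacker's anti-remediation friction first
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            DisableApiTermination={"Value": False},
        )
    ec2.terminate_instances(InstanceIds=instance_ids)

    # Confirm termination actually completed
    waiter = ec2.get_waiter("instance_terminated")
    waiter.wait(InstanceIds=instance_ids)

if __name__ == "__main__":
    contain(["i-0123456789abcdef0"])  # placeholder instance ID
```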
“People also ask”: fast answers for security leaders
Is this an AWS vulnerability?
No. This kind of cryptomining campaign depends on valid, compromised credentials. That’s an identity security failure on the customer side of the shared responsibility model.
Why do cryptominers target ECS/Fargate as well as EC2?
Because it’s fast, scalable, and can bypass EC2-focused monitoring. ECS/Fargate gives attackers parallel compute without managing hosts.
What’s the first sign of IAM credential compromise?
Often it’s not the miner. It’s odd IAM API activity: permission probing (DryRun), role creation, policy attachments, and unexpected service-linked roles.
Can AI really catch this faster than rules?
Yes—when it’s trained on your normal activity and correlates event sequences. Rules can catch spikes; AI is better at catching “this sequence doesn’t belong here” within minutes.
The stance I’d take going into 2026: treat identity telemetry as your early-warning radar
This AWS campaign is a clean reminder that cloud attacks aren’t slowed down by payload delivery anymore. They’re accelerated by legitimate APIs. If an attacker can authenticate, they can build infrastructure faster than most teams can open a ticket.
AI in cybersecurity earns its keep when it’s watching identity and cloud control plane behavior in real time, connecting small “weird” actions into a single story before your bill (or your customers) tells you something went wrong.
If you had to prioritize one improvement before next quarter: deploy AI-driven anomaly detection focused on IAM credential abuse and cloud resource provisioning, and wire it into automated containment. Then ask yourself an uncomfortable but useful question: if an admin credential leaked tonight, would we stop it in 10 minutes, or notice it next week when finance calls?