AI-driven IAM defense can detect AWS cryptomining in minutes by spotting abnormal credential use, compute spikes, and persistence tactics. Learn how to respond fast.

Stop AWS Cryptomining with AI-Driven IAM Defense
On November 2, 2025, AWS detected something that should make every cloud team pause: attackers used compromised IAM credentials to stand up cryptocurrency miners in under 10 minutes. Not malware exploiting a zero-day. Not a cloud service vulnerability. Just valid credentials, a fast script, and enough permissions to turn your account into someone else’s mining rig.
Most companies get this wrong by treating cloud cryptomining as a “cost anomaly” problem. It’s not. It’s an identity problem first—and an automation problem second. If you can’t spot abnormal IAM behavior quickly (and respond automatically), you’re playing defense at human speed against attackers who operate like a CI/CD pipeline.
This post is part of our AI in Cybersecurity series, and this incident is a clean example of where AI helps in a practical way: behavioral detection of credential misuse, anomaly detection in cloud resource activity, and automated containment that doesn’t wait for a ticket queue.
What happened in this AWS cryptomining campaign (and why it’s different)
The core move was straightforward: an unknown actor obtained IAM user credentials with admin-like permissions, then quickly enumerated the environment and deployed miners across Amazon ECS (Fargate) and Amazon EC2.
The details matter because they show intent and maturity:
The “DryRun” trick: testing permissions without leaving a big bill
Attackers invoked the RunInstances API with the DryRun flag. That’s a smart reconnaissance step because it lets them confirm they can launch instances without actually launching them.
From a defender’s standpoint, this is a huge signal:
- It’s not normal for human admins to repeatedly test RunInstances with DryRun.
- It’s even less normal when that behavior appears suddenly for an identity that hasn’t done it before.
AI-based behavioral analytics can treat this as pre-attack staging, not “maybe harmless experimentation.” In practice, you want your detections to fire before the first miner starts, not after your CFO asks why yesterday’s cloud spend doubled.
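As a concrete illustration, here is a minimal Python sketch of that detection. A DryRun RunInstances call that would have succeeded shows up in CloudTrail with errorCode Client.DryRunOperation, so repeated probes are easy to count per principal. The event shapes, the threshold of 3, and the sample ARNs are illustrative, not tuned values.

```python
# Sketch: flag principals issuing repeated DryRun permission probes.
# Field names follow CloudTrail's JSON schema; events are simplified.

def find_dryrun_probes(events, threshold=3):
    """Return principal ARNs with `threshold`+ DryRun RunInstances probes."""
    counts = {}
    for e in events:
        if (e.get("eventName") == "RunInstances"
                and e.get("errorCode") == "Client.DryRunOperation"):
            arn = e["userIdentity"]["arn"]
            counts[arn] = counts.get(arn, 0) + 1
    return {arn for arn, n in counts.items() if n >= threshold}

sample = [
    {"eventName": "RunInstances", "errorCode": "Client.DryRunOperation",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/suspect"}}
    for _ in range(4)
] + [
    {"eventName": "DescribeInstances",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/admin"}},
]

print(find_dryrun_probes(sample))  # only the probing identity surfaces
```

In a real pipeline this would feed a risk score rather than alert on its own, but even as a standalone rule it catches the staging step before any instance exists.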
ECS at scale: dozens of clusters, malicious task definitions
In observed cases, the actor created dozens of ECS clusters, sometimes exceeding 50 clusters in a single attack, then registered a task definition pointing to a malicious Docker image (later removed).
That pattern—rapid cluster sprawl + new task definitions + high CPU intent—is a classic candidate for anomaly detection:
- Unusual rate of CreateCluster, RegisterTaskDefinition, and CreateService
- New container image references that have no historical precedent in your org
- Tasks requesting high CPU allocation outside normal baseline
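A sliding-window rate check over those control-plane calls is enough to catch the sprawl pattern. The sketch below assumes events as (timestamp, principal, API) tuples; the 10-minute window and threshold of 20 calls are placeholder values you would tune against your own baseline.

```python
from collections import defaultdict

# Sketch: flag identities making an abnormal burst of ECS control-plane calls.
ECS_CONTROL_APIS = {"CreateCluster", "RegisterTaskDefinition", "CreateService"}

def ecs_sprawl_alerts(events, window=600, threshold=20):
    """events: (ts_seconds, principal_arn, api_name); returns flagged ARNs."""
    per_principal = defaultdict(list)
    for ts, arn, api in events:
        if api in ECS_CONTROL_APIS:
            per_principal[arn].append(ts)
    alerts = set()
    for arn, stamps in per_principal.items():
        stamps.sort()
        lo = 0
        for hi, ts in enumerate(stamps):
            while ts - stamps[lo] > window:
                lo += 1
            if hi - lo + 1 >= threshold:  # threshold calls inside one window
                alerts.add(arn)
                break
    return alerts

# 50 clusters in ~4 minutes, the shape seen in this campaign:
attack = [(i * 5, "arn:aws:iam::111122223333:user/suspect", "CreateCluster")
          for i in range(50)]
print(ecs_sprawl_alerts(attack))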
EC2 autoscaling as an attack multiplier
They also created autoscaling groups configured to scale from 20 to 999 instances. The goal is obvious: consume quotas, maximize compute, and print money (for them) until someone notices.
Here’s my stance: if you allow identities to create autoscaling groups that can scale to 999 without strict guardrails, you’re effectively leaving a loaded weapon on the table.
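The guardrail itself is simple to express. Here's a minimal sketch of a pre-flight check, with a hypothetical business ceiling of 20 and plain-dict request shapes standing in for the real API parameters; in production this logic belongs in preventive policy (SCPs or an admission layer in your provisioning pipeline), not application code.

```python
# Sketch of a preventive guardrail: validate autoscaling requests against a
# hard ceiling before they ever reach AWS. Ceiling and request shape are
# illustrative assumptions.

MAX_ALLOWED_ASG_SIZE = 20  # hypothetical business ceiling

def validate_asg_request(request):
    """Return (allowed, reason) for a proposed autoscaling configuration."""
    max_size = request.get("MaxSize", 0)
    if max_size > MAX_ALLOWED_ASG_SIZE:
        return False, f"MaxSize {max_size} exceeds ceiling {MAX_ALLOWED_ASG_SIZE}"
    if request.get("DesiredCapacity", 0) > max_size:
        return False, "DesiredCapacity exceeds MaxSize"
    return True, "ok"

# The campaign's 20-to-999 configuration is rejected outright:
print(validate_asg_request({"MinSize": 20, "MaxSize": 999, "DesiredCapacity": 999}))
```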
The persistence move defenders miss: termination protection as a weapon
The standout technique in the report was the use of ModifyInstanceAttribute with disableApiTermination=True, which enables instance termination protection.
This matters because it directly targets incident response:
- Your responders (or automated cleanup) try to terminate suspicious instances.
- The termination fails.
- Time gets wasted figuring out why.
- Miners keep running.
“Termination protection isn’t just an operational setting—it can be attacker persistence.”
A lot of teams only alert on “instances launched” or “CPU spiked.” Fewer teams alert on “someone turned on termination protection in the middle of an incident.” That’s a mistake.
AI can help here by correlating actions into intent:
- New instances launched and then termination protection enabled shortly after
- Termination protection enabled by an identity that has never used that API
- Multiple instances receiving the same attribute change in a tight time window
That sequence is more than suspicious—it’s tactical.
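The first of those correlations can be sketched in a few lines: join RunInstances launches to ModifyInstanceAttribute calls that enable termination protection shortly after. The event dicts below are simplified stand-ins for CloudTrail records, and the 30-minute window is an illustrative choice.

```python
from datetime import datetime, timedelta

# Sketch: flag instances that get termination protection enabled soon after
# launch -- the persistence move described above.

def protection_after_launch(events, within_minutes=30):
    """events: simplified CloudTrail-like dicts; returns (instance, principal) hits."""
    launches = {}  # instance_id -> (launch_time, principal)
    hits = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["eventName"] == "RunInstances":
            for iid in e["instanceIds"]:
                launches[iid] = (e["time"], e["principal"])
        elif (e["eventName"] == "ModifyInstanceAttribute"
              and e.get("disableApiTermination") is True):
            launched = launches.get(e["instanceId"])
            if launched and e["time"] - launched[0] <= timedelta(minutes=within_minutes):
                hits.append((e["instanceId"], e["principal"]))
    return hits

t0 = datetime(2025, 11, 2, 12, 0)
events = [
    {"eventName": "RunInstances", "time": t0,
     "principal": "user/suspect", "instanceIds": ["i-0abc"]},
    {"eventName": "ModifyInstanceAttribute", "time": t0 + timedelta(minutes=4),
     "principal": "user/suspect", "instanceId": "i-0abc",
     "disableApiTermination": True},
]
print(protection_after_launch(events))
```

The other two correlations (first-time use of the API, and many instances changed in a tight window) are variations on the same join.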
Why this is an IAM problem first (not a cloud problem)
AWS emphasized this wasn’t an AWS vulnerability. The attacker already had valid credentials.
That aligns with what we keep seeing across cloud incidents: identity is the control plane. If an attacker has credentials with broad privileges, they don’t need exploits. They just need API calls.
In this campaign, the attacker also:
- Created IAM roles using CreateServiceLinkedRole and CreateRole
- Attached a Lambda execution policy (AWSLambdaBasicExecutionRole)
- Created a Lambda function callable by any principal
- Created an IAM user (reported as user-x1x2x3x4) and attached Amazon SES full access, likely to support phishing or spam
This is a common pattern: cryptomining is the quick monetization play, but the IAM expansion is what turns a “bill shock” into a longer-term security problem.
How AI can stop cryptomining before it scales
AI shouldn’t be a buzzword bolted onto alerts. Used properly, it changes when you find the problem (earlier) and how you respond (faster, safer, more consistently).
1) Behavioral detection for IAM credential misuse
The fastest wins come from UEBA-style baselining for identities (human and machine):
- What APIs does this principal normally call?
- From what IP ranges and geographies?
- At what times?
- In what sequence?
In this incident, the sequence itself is incriminating:
- Enumerate permissions and quotas
- RunInstances with DryRun
- Create roles
- Create ECS clusters + task definitions
- Launch miners and scale
- Enable termination protection
AI models that score sequence anomalies (not just single events) can flag this as an attack chain.
Practical detection idea: assign a high-risk score when DryRun permission checks are followed within minutes by cluster creation and compute provisioning. Humans don’t operate that way. Scripts do.
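That rule can be sketched as a simple stateful scorer over one principal's event stream. The point values, the 15-minute window, and the event-tuple shape are all assumptions for illustration; a real system would learn these weights rather than hard-code them.

```python
# Sketch of the sequence rule: a DryRun probe followed within minutes by real
# provisioning calls from the same principal scores far higher than either
# event alone.

PROVISIONING = {"RunInstances", "CreateCluster", "RegisterTaskDefinition"}

def score_sequence(events, window_s=900):
    """events: time-ordered (ts_seconds, api, dry_run) for one principal."""
    score, last_probe = 0, None
    for ts, api, dry_run in events:
        if api == "RunInstances" and dry_run:
            last_probe = ts
            score += 10   # probing alone is mildly suspicious
        elif (api in PROVISIONING and last_probe is not None
              and ts - last_probe <= window_s):
            score += 50   # probe -> provision: attack-chain shaped
    return score

# Probe at t=0, cluster at t=300s, launch at t=400s -> high score:
print(score_sequence([(0, "RunInstances", True),
                      (300, "CreateCluster", False),
                      (400, "RunInstances", False)]))
```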
2) Anomaly detection for crypto-mining resource behavior
Crypto miners have predictable cloud footprints:
- Sustained high CPU/GPU utilization
- Rapid fleet expansion
- New workloads with no deployment metadata (no CI pipeline fingerprints, no standard tags)
AI systems can combine telemetry (CloudTrail, workload metrics, container signals) to detect:
- Unusual CPU allocation requests in ECS task definitions
- Compute spikes inconsistent with historical baselines
- “Burst then plateau” utilization typical of mining
The goal isn’t just detection. It’s classification: “this looks like mining,” not merely “CPU is high.”
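As a toy version of that classification step, here is a heuristic for the "burst then plateau" shape: utilization jumps to a sustained high, flat level, unlike bursty interactive workloads. The 85% level, 5-point flatness band, and 10-sample plateau are illustrative thresholds, not tuned values.

```python
# Sketch heuristic: sustained, flat, high CPU is mining-shaped; oscillating
# or short-lived spikes are not.

def looks_like_mining(cpu_series, level=85.0, flatness=5.0, min_plateau=10):
    """cpu_series: CPU utilization samples (percent), oldest first."""
    if len(cpu_series) < min_plateau:
        return False
    plateau = cpu_series[-min_plateau:]
    high = all(v >= level for v in plateau)
    flat = max(plateau) - min(plateau) <= flatness
    return high and flat
```

A production classifier would combine this with the deployment-metadata signals above (no CI fingerprints, no standard tags) so one noisy metric can't trigger it alone.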
3) Automated containment that doesn’t break production
Automation is where teams get nervous, and I get it. You don’t want an auto-remediation playbook to nuke the wrong thing.
So the best pattern is graduated response:
1. Soft containment (low blast radius)
   - Detach high-risk policies from the suspicious principal
   - Revoke active sessions / rotate keys
   - Block suspicious source IPs at the edge where possible
2. Resource guardrails (stop the bleeding)
   - Temporarily restrict instance types (GPU/ML) via SCPs
   - Freeze autoscaling to a safe maximum
   - Require approval tags for new ECS services
3. Hard containment (when confidence is high)
   - Quarantine the account segment / restrict org-level actions
   - Disable termination protection (with change control logging)
   - Terminate known-bad resources
AI helps by increasing confidence through correlation, so you can trigger stronger actions with fewer false positives.
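The graduated tiers reduce to a small mapping from confidence to actions. In this sketch the action names are placeholders for SOAR playbook steps, not real AWS API calls, and the 0.3/0.6/0.9 cut points are illustrative.

```python
# Sketch: map a correlation-derived confidence score to graduated containment.

def containment_plan(confidence):
    """Return ordered containment actions for a 0.0-1.0 confidence score."""
    plan = []
    if confidence >= 0.3:  # soft containment: low blast radius
        plan += ["revoke_sessions", "rotate_keys", "detach_high_risk_policies"]
    if confidence >= 0.6:  # resource guardrails: stop the bleeding
        plan += ["freeze_autoscaling_max", "restrict_gpu_instance_types"]
    if confidence >= 0.9:  # hard containment: high confidence only
        plan += ["disable_termination_protection", "terminate_known_bad_instances"]
    return plan
```

The useful property is monotonicity: higher confidence only ever adds actions, so a score that climbs during an incident escalates the response without ever undoing earlier containment.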
A defender’s checklist mapped to this attack chain
If you want a short, actionable plan that aligns to what happened here, use this as your baseline.
Identity hardening (prevents the initial foothold from becoming full control)
- Enforce MFA for all users, especially privileged identities
- Prefer temporary credentials over long-term access keys
- Apply least privilege ruthlessly (admin-like IAM users are a recurring root cause)
- Audit who can call:
  - CreateRole, AttachRolePolicy, CreateServiceLinkedRole
  - iam:PassRole (often overlooked, frequently abused)
Compute guardrails (limits blast radius even if credentials are stolen)
- Enforce policy constraints on:
- Maximum autoscaling sizes
- Allowed instance families (especially GPU/ML)
- Regions where compute can be launched
- Require tagging standards and deny launches without required tags
- Alert on large deviations in ECS cluster count and task definition registrations
Detection engineering (catches the “10-minute” window)
- Log and analyze CloudTrail across services, not just EC2
- Monitor for:
  - RunInstances with DryRun
  - Rapid CreateCluster / RegisterTaskDefinition / CreateService
  - ModifyInstanceAttribute enabling termination protection
  - Sudden SES permission grants (AmazonSESFullAccess) to new users/roles
AI-driven signals to prioritize (high value, low noise)
- Identity calling new APIs it has never used before
- Burst of IAM changes + compute provisioning in a short window
- New container image references never seen in your environment
- Autoscaling max set abnormally high relative to historical norms
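The first signal on that list is also the cheapest to build. A minimal sketch: keep a per-principal set of APIs seen historically and flag anything outside it. Here the baseline is an in-memory dict for illustration; in practice it would be built from weeks of CloudTrail history.

```python
# Sketch: flag a principal calling an API it has never used before.

def first_time_apis(baseline, new_events):
    """baseline: {principal: set(api)}; new_events: (principal, api) pairs."""
    alerts = []
    for principal, api in new_events:
        seen = baseline.setdefault(principal, set())
        if api not in seen:
            alerts.append((principal, api))
            seen.add(api)
    return alerts

baseline = {"user/admin": {"DescribeInstances", "RunInstances"}}
print(first_time_apis(baseline, [("user/admin", "ModifyInstanceAttribute"),
                                 ("user/admin", "DescribeInstances")]))
```

On its own this rule is noisy (people legitimately try new APIs), which is why it belongs in a scoring model alongside the burst and sequence signals rather than as a standalone alert.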
People also ask: “Is cryptomining just a cost issue?”
It starts as cost. It rarely ends there.
A cryptomining incident proves three things:
- An attacker can authenticate into your cloud.
- They can provision and persist workloads.
- They can often modify IAM to maintain access or expand into phishing (SES access is a strong clue).
Treat it like an intrusion, not a billing anomaly.
What to do this week if you run AWS
If you’re looking for concrete next steps you can execute before year-end change freezes, do these four:
- Hunt for DryRun permission tests followed by compute actions within 15 minutes.
- Review who can enable termination protection (ModifyInstanceAttribute) and alert on its use.
- Put a hard ceiling on autoscaling max (by policy) that matches your real business needs.
- Deploy AI-assisted behavioral analytics for IAM principals so unusual sequences trigger fast containment.
Cloud attacks are increasingly “API-native.” That’s exactly why the AI in Cybersecurity approach fits: machine-speed attacks need machine-speed detection and response.
If your detections only trigger after miners are running, you’re already late. The better question is: which identity behaviors in your environment would you want to auto-contain in the first 60 seconds?