AI Stops AWS Crypto Mining from Stolen IAM Keys

AI in Cybersecurity · By 3L3C

AI-driven anomaly detection can catch AWS crypto mining launched with stolen IAM keys—often within minutes. Learn the behaviors to detect and the controls to fix.

Tags: aws-security, iam, cloud-threat-detection, cryptomining, ai-anomaly-detection, incident-response


A well-run cloud crypto mining attack doesn’t start with malware. It starts with an IAM credential that never should’ve had admin-like power.

In a recent AWS campaign observed by Amazon’s automated monitoring and GuardDuty, attackers used compromised IAM user credentials to spin up crypto miners across ECS/Fargate and EC2—and they did it fast. Within 10 minutes of initial access, miners were already running. That speed is the point: by the time a human sees the bill spike or a dashboard go red, the attacker has already scaled.

This post is part of our AI in Cybersecurity series, and I want to make a clear case: AI-driven anomaly detection is one of the few defenses that can react on the same timeline as cloud-native attackers. Not because “AI is magic,” but because cloud attacks are largely behavioral. And behavior is something machines can watch continuously.

What actually happened in this AWS crypto mining campaign

The core takeaway is simple: no AWS vulnerability was required. The attacker already had valid credentials.

AWS reported that the campaign began with a discovery phase, then quickly progressed to provisioning and persistence. The interesting part isn’t “crypto mining exists.” It’s the attacker’s operational discipline: they tested permissions, mapped quotas, deployed across multiple services, and deliberately complicated cleanup.

Step 1: Fast enumeration + stealthy permission checks

After logging in with stolen IAM credentials, the actor enumerated resources and quotas. A detail worth remembering: they invoked the RunInstances API using the DryRun flag.

That’s a clever move.

  • DryRun lets them confirm whether the credential can launch instances without actually launching them.
  • That means less cost, less noise, and often fewer alarms (because nothing is “running” yet).
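The DryRun mechanic is worth internalizing: a real boto3 call like `ec2.run_instances(..., DryRun=True)` never launches anything; it always raises a `ClientError`, and the error code carries the answer. A minimal sketch of how the attacker reads that answer (pure Python, so it runs without AWS access):

```python
# Sketch: how a DryRun RunInstances outcome maps to "can this credential
# launch instances?" With boto3, ec2.run_instances(DryRun=True) raises a
# botocore ClientError either way; only the error code differs.
from typing import Optional

def interpret_dry_run(error_code: str) -> Optional[bool]:
    """True: credential could launch. False: it cannot. None: unrelated error."""
    if error_code == "DryRunOperation":
        return True       # request would have succeeded
    if error_code == "UnauthorizedOperation":
        return False      # credential lacks ec2:RunInstances
    return None           # something else (bad AMI ID, throttling, ...)

# The attacker learns launch capability with zero instances started:
print(interpret_dry_run("DryRunOperation"))        # True
print(interpret_dry_run("UnauthorizedOperation"))  # False
```

The asymmetry is the defender's opportunity: the probe is invisible in billing, but it still lands in CloudTrail as a `RunInstances` event with error code `Client.DryRunOperation`.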

AI in cybersecurity works well here because this is exactly the sort of “quiet recon” pattern that looks normal in a single event, but suspicious in context.

Step 2: Multi-service deployment across ECS clusters, Fargate tasks, and EC2 Auto Scaling

Once the actor confirmed capability, they moved to scale.

AWS observed the creation of dozens of ECS clusters—in some cases over 50 clusters in one incident—followed by registering task definitions that referenced a malicious Docker image.

On the EC2 side, they created Auto Scaling groups configured to scale from 20 up to 999 instances. That’s not a typo. They aimed straight at service quotas to maximize compute consumption.

They also targeted high-performance GPU and machine learning instance types in addition to general-purpose compute. For defenders, that matters because GPU instances can turn a crypto miner from “annoying cost leak” into “budget-killing event” quickly.

Step 3: Persistence and “make cleanup harder” tactics

The most defender-hostile behavior AWS highlighted was the use of ModifyInstanceAttribute with disableApiTermination = True.

Translation: even if your SOC identifies the bad instances, your normal termination workflows may fail until termination protection is re-enabled.

This is a strong reminder of something many orgs still underestimate:

Attackers don’t just want compute. They want time.

They also created roles (including service-linked roles), a Lambda function that could be invoked broadly, and an additional IAM user with permissions attached that could support follow-on abuse (including potential phishing via email service permissions).

Why credential-based cloud attacks are so hard to catch

The uncomfortable truth: stolen credentials produce “legitimate” API calls. Your environment sees the right signatures, the right authentication flow, and the right service endpoints.

Traditional detection approaches often fall short because they’re built around:

  • known bad hashes
  • known malicious IPs
  • known indicators of compromise
  • perimeter assumptions that don’t exist in cloud control planes

Cloud crypto mining campaigns are different. They’re closer to fraud than malware. The attacker is “buying” compute with your identity.

The real cost isn’t only the cloud bill

Crypto mining is noisy financially, but the bigger risk is what it proves:

  • An attacker authenticated successfully
  • They likely discovered permissions boundaries
  • They created new roles/users/functions to persist
  • They tested what your incident response can and can’t do

Even if you stop the miners quickly, the breach may not be over. Mining is sometimes the “quick win” while access is monetized elsewhere.

Where AI-driven anomaly detection fits (and where it doesn’t)

AI won’t fix an over-privileged IAM user. But it can spot the moment that IAM identity starts behaving like an attacker.

Think of AI here as a system that continuously answers:

  • Is this identity acting like it usually acts?
  • Is this API call sequence typical for this role/team/tooling?
  • Does this provisioning behavior match approved patterns?

What AI can detect early in this attack chain

1) “DryRun” reconnaissance patterns

DryRun isn’t malicious. Plenty of automation uses it. But a model that understands baselines can flag:

  • a user that never calls RunInstances suddenly doing so
  • repeated DryRun calls across regions
  • DryRun followed by role creation and cluster sprawl
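A rule along these lines can run directly over CloudTrail records, where DryRun probes surface as `RunInstances` events with error code `Client.DryRunOperation`. A minimal sketch (field names mirror CloudTrail; the event data and thresholds are invented for illustration):

```python
# Sketch: flag identities whose recent CloudTrail history shows repeated
# DryRun RunInstances probes across multiple regions. Event shapes mirror
# CloudTrail (eventName, awsRegion, errorCode); the data here is invented.
from collections import defaultdict

def flag_dryrun_recon(events, min_calls=3, min_regions=2):
    regions = defaultdict(set)   # identity ARN -> regions probed
    counts = defaultdict(int)    # identity ARN -> probe count
    for e in events:
        if (e["eventName"] == "RunInstances"
                and e.get("errorCode") == "Client.DryRunOperation"):
            arn = e["userIdentity"]["arn"]
            regions[arn].add(e["awsRegion"])
            counts[arn] += 1
    return {arn for arn in counts
            if counts[arn] >= min_calls and len(regions[arn]) >= min_regions}

events = [
    {"eventName": "RunInstances", "errorCode": "Client.DryRunOperation",
     "awsRegion": r,
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/ci-deployer"}}
    for r in ("us-east-1", "us-west-2", "eu-west-1")
]
print(flag_dryrun_recon(events))  # the multi-region probing identity is flagged
```

A per-identity baseline (has this ARN ever called RunInstances before?) turns this from a static rule into the anomaly signal described above.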

2) Sudden control-plane bursts

Humans and normal CI/CD systems don’t typically create 30–50 ECS clusters rapidly. AI models are good at flagging:

  • unusual provisioning velocity
  • unusual diversity of services touched in a short window (IAM → ECS → EC2 → Lambda)
  • deviation from change windows
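Velocity and diversity can be combined into one window-based score. A simple sketch over (timestamp, eventSource, eventName) tuples; the thresholds (15 create calls in 10 minutes, 3+ distinct services) are illustrative assumptions, not tuned values:

```python
# Sketch: sliding-window burst detector over control-plane events.
# Thresholds are illustrative; a production rule would baseline per account.
from datetime import datetime, timedelta

def burst_detected(events, window=timedelta(minutes=10),
                   create_threshold=15, service_threshold=3):
    """events: (timestamp, eventSource, eventName) tuples.
    True if any window shows an anomalous provisioning burst."""
    events = sorted(events)
    for i, (t0, _, _) in enumerate(events):
        in_win = [e for e in events[i:] if e[0] - t0 <= window]
        creates = sum(1 for _, _, name in in_win
                      if name.startswith(("Create", "RunInstances", "RegisterTask")))
        services = {src for _, src, _ in in_win}
        if creates >= create_threshold and len(services) >= service_threshold:
            return True
    return False

base = datetime(2025, 1, 1, 3, 0)
storm = [(base + timedelta(seconds=10 * i), "ecs.amazonaws.com", "CreateCluster")
         for i in range(40)]
storm += [(base, "iam.amazonaws.com", "CreateRole"),
          (base, "ec2.amazonaws.com", "RunInstances")]
print(burst_detected(storm))  # True: 40 clusters in minutes, 3 services touched
```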

3) Quota-maxing autoscaling behavior

An autoscaling group configured up to 999 instances is rarely legitimate. A good detection system should treat “scale ceiling anomalies” as first-class signals.
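One hedged way to make scale ceilings first-class: compare a requested MaxSize against the largest ceiling this account has ever approved. The slack factor and history below are assumptions for illustration:

```python
# Sketch: flag a CreateAutoScalingGroup/UpdateAutoScalingGroup request whose
# MaxSize dwarfs anything historically approved. Slack factor is an assumption.

def max_size_anomaly(requested_max, approved_ceilings, slack=2.0):
    baseline = max(approved_ceilings, default=0)
    return requested_max > slack * baseline

history = [8, 12, 20]                   # typical approved ceilings (invented)
print(max_size_anomaly(999, history))   # True: the 20-to-999 group in this campaign
print(max_size_anomaly(24, history))    # False: within 2x of normal
```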

4) Termination-protection misuse

Setting disableApiTermination=True isn’t always wrong (some production teams use it), but it’s very suspicious when it appears alongside:

  • new instances launched outside standard templates
  • new roles created moments earlier
  • instance types associated with compute spikes
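That "suspicious in combination" logic is a correlation rule: raise severity when termination protection is set shortly after identity-plane changes by the same principal. A sketch (field names mirror CloudTrail; the 30-minute window is an assumption, and a real rule would also inspect requestParameters to confirm disableApiTermination was the attribute being set):

```python
# Sketch: correlate ModifyInstanceAttribute with recent identity-plane
# precursors by the same ARN. Window and event shapes are assumptions.
from datetime import datetime, timedelta

SUSPICIOUS_PRECURSORS = {"CreateRole", "CreateUser", "AttachUserPolicy"}

def correlate_termination_protection(events, window=timedelta(minutes=30)):
    """events: list of {"time", "eventName", "arn"} dicts.
    Returns ARNs that touched instance attributes within `window`
    of a suspicious identity-plane event."""
    hits = set()
    for e in events:
        if e["eventName"] != "ModifyInstanceAttribute":
            continue
        for p in events:
            if (p["arn"] == e["arn"]
                    and p["eventName"] in SUSPICIOUS_PRECURSORS
                    and timedelta(0) <= e["time"] - p["time"] <= window):
                hits.add(e["arn"])
    return hits

t = datetime(2025, 1, 1, 2, 0)
evts = [
    {"time": t, "eventName": "CreateRole",
     "arn": "arn:aws:iam::111122223333:user/intruder"},
    {"time": t + timedelta(minutes=7), "eventName": "ModifyInstanceAttribute",
     "arn": "arn:aws:iam::111122223333:user/intruder"},
]
print(correlate_termination_protection(evts))
```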

What AI cannot replace

AI doesn’t remove the need for fundamentals:

  • least privilege
  • MFA
  • short-lived credentials
  • audited role assumptions
  • guardrails around who can create roles, clusters, and scaling groups

If your IAM is wide open, AI becomes a smoke alarm in a house full of gasoline.

A practical defense plan: stop the miner and close the door

If you’re building your 2026 cloud security plan right now (and many teams are, heading into the new budget year), use this campaign as a checklist for identity-first defense.

Lock down IAM so stolen credentials can’t do much

Start here because it reduces blast radius immediately.

  • Remove long-term access keys wherever possible; prefer short-lived sessions.
  • Enforce MFA for all human users, especially any console-capable identity.
  • Apply least privilege aggressively—most IAM users should not be able to create roles, clusters, or scaling groups.
  • Restrict high-risk APIs (CreateRole, CreateServiceLinkedRole, PassRole, autoscaling creation) to tightly governed roles.

If you can’t remove privileges due to operational constraints, add compensating controls (approvals, just-in-time elevation, or scoped permissions boundaries).
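For the high-risk-API restriction above, one common shape is an explicit-deny policy that exempts a single governed role. A sketch, built as a Python dict so it renders as policy JSON; the account ID and role ARN are placeholders, and your org might prefer tags, permissions boundaries, or SCPs as the exemption mechanism:

```python
# Sketch: explicit-deny policy for the high-risk APIs named above.
# The principal ARN in the condition is a placeholder, not a real role.
import json

HIGH_RISK_ACTIONS = [
    "iam:CreateRole",
    "iam:CreateServiceLinkedRole",
    "iam:PassRole",
    "autoscaling:CreateAutoScalingGroup",
]

deny_unless_governed = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyHighRiskUnlessGovernedRole",
        "Effect": "Deny",
        "Action": HIGH_RISK_ACTIONS,
        "Resource": "*",
        "Condition": {
            "ArnNotEquals": {
                # Placeholder: the one tightly governed role allowed through
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/infra-governed"
            }
        },
    }],
}

print(json.dumps(deny_unless_governed, indent=2))
```

Explicit denies win over any allow, so this holds even if a stolen credential carries a broad managed policy.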

Put AI detection where it has the most leverage

You get the best ROI from AI in cybersecurity when you feed it the right signals and allow it to trigger fast action.

High-signal inputs to prioritize:

  • CloudTrail events for IAM, EC2, ECS, Lambda, Autoscaling
  • identity context (role, team, normal regions, normal services)
  • provisioning context (templates used, tags, standard images)

High-value detections for this campaign pattern:

  • identity calling RunInstances/RegisterTaskDefinition for the first time
  • rapid, repeated ECS cluster creation
  • autoscaling max size anomalies
  • termination protection enabled on newly created compute
  • new IAM user + powerful managed policy attachment shortly after login

Make remediation resilient to attacker “speed bumps”

This campaign specifically tried to slow down incident response.

Build runbooks that assume attackers will:

  • enable termination protection
  • create multiple clusters/services
  • spread across regions
  • create new roles/users to regain access

Practical steps that help:

  1. Pre-authorize a break-glass role that can override termination protection and delete rogue resources.
  2. Automate quarantine actions (tag-based isolation, SCP-like restrictions, or forced credential resets) for identities that trip high-confidence alerts.
  3. Require standard tags on approved infrastructure and alert on untagged provisioning at scale.
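Step 1 above has a specific ordering requirement: termination protection must be cleared per instance before terminate succeeds, or EC2 rejects the call with OperationNotPermitted. A sketch of that break-glass routine; the call shapes match boto3's EC2 client, but a recording stand-in is used here so the example runs without AWS access:

```python
# Sketch: break-glass remediation that clears termination protection,
# then terminates. Call shapes match boto3's EC2 client; FakeEC2 is a
# stand-in that records calls so this runs without credentials.

def release_and_terminate(ec2, instance_ids):
    for iid in instance_ids:
        # Must be cleared per instance first, or terminate_instances
        # fails with OperationNotPermitted on protected instances.
        ec2.modify_instance_attribute(
            InstanceId=iid, DisableApiTermination={"Value": False})
    ec2.terminate_instances(InstanceIds=instance_ids)

class FakeEC2:
    def __init__(self): self.calls = []
    def modify_instance_attribute(self, **kw): self.calls.append(("modify", kw))
    def terminate_instances(self, **kw): self.calls.append(("terminate", kw))

client = FakeEC2()  # in production: boto3.client("ec2") under the break-glass role
release_and_terminate(client, ["i-0abc123", "i-0def456"])
print(client.calls[-1])  # ('terminate', {'InstanceIds': ['i-0abc123', 'i-0def456']})
```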

“People also ask” questions (answered directly)

How do attackers use AWS for crypto mining?

They authenticate (often with stolen IAM credentials), enumerate quotas and permissions, then provision compute on ECS/Fargate and EC2—sometimes using autoscaling to consume as much capacity as possible.

Why is IAM credential theft so dangerous in AWS?

Because AWS is controlled through API calls. If an attacker has valid credentials with broad permissions, they can create resources, roles, and persistence mechanisms without exploiting a software vulnerability.

Can AI detect cloud crypto mining before costs spike?

Yes—when AI is applied to identity behavior and control-plane activity (CloudTrail, provisioning velocity, unusual service combinations), it can flag attacks during the reconnaissance and early provisioning phases.

What to do next if you want fewer “surprise miners” in 2026

Credential-based cloud attacks are getting more procedural: test with DryRun, expand via ECS and EC2, then slow down responders with termination protection and persistence. That’s not random opportunism. It’s a playbook.

If you take one stance from this incident, make it this: cloud security is now identity security, and identity security needs continuous, behavior-based monitoring. That’s where AI in cybersecurity earns its keep—spotting the odd sequence of actions while it’s happening, not after finance calls about the bill.

If you’re assessing your readiness, ask yourself: If an IAM credential with broad permissions is stolen tonight, what would stop an attacker from running miners within 10 minutes—and would your team know before the first hour is over?