AI vs. Stolen AWS IAM Keys: Stop Crypto Mining Fast

AI in Cybersecurity • By 3L3C

Compromised AWS IAM keys can launch crypto miners in minutes. Learn the attack pattern and how AI-driven detection stops identity abuse before costs spike.

AWS security · IAM · crypto mining · cloud threat detection · AI security analytics · incident response

A crypto miner doesn’t need a fancy zero-day to hurt you. It needs one set of valid AWS IAM credentials with enough permissions to spin up compute at scale.

That’s what makes the AWS crypto mining campaign reported this week so unsettling: attackers moved from access to active mining in about 10 minutes. They weren’t “breaking into AWS.” They were operating inside the rules of your account—using APIs exactly the way your automation would.

This post is part of our AI in Cybersecurity series, and I’m going to be blunt: most organizations are still trying to solve credential compromise with controls built for a slower era—manual reviews, periodic audits, and alerts that fire after the bill arrives. The practical fix is AI-driven cloud security that can detect identity misuse in real time, stop abusive scaling, and speed up incident response when attackers try to persist.

What actually happened in this AWS crypto mining campaign

Answer first: The attackers used compromised IAM user credentials (with admin-like power), quickly mapped the environment, validated permissions quietly, then deployed miners across ECS and EC2 while adding persistence tricks to slow down responders.

Amazon observed activity that started with resource and permission enumeration, then expanded into crypto mining on ECS (including Fargate) and EC2, with a preference for expensive instance families such as GPU and machine learning instances.

The “quiet check” attackers use: DryRun

One detail you should care about: attackers tested whether they could launch instances by calling the EC2 RunInstances API with DryRun enabled.

That matters because DryRun validates permissions without launching anything. No instances. No immediate cost spike. Less obvious telemetry for teams that only watch for “new instances created.”

If you want a simple rule of thumb: permission-checking calls right before a burst of resource creation are high-signal indicators of credential abuse.
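
As a rough illustration, here is a minimal detection sketch in Python (boto3) that pairs DryRun probes with real launches by the same principal. It assumes CloudTrail records DryRun calls with a Client.DryRunOperation error code; the one-hour window and the matching logic are illustrative, not a production detector.

```python
# Minimal sketch: flag principals that issue DryRun permission checks on
# RunInstances and then launch instances for real in the same window.
# Assumption: DryRun calls appear in CloudTrail with errorCode
# "Client.DryRunOperation". Window and matching are illustrative.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
window_start = datetime.now(timezone.utc) - timedelta(hours=1)

dry_run_by_principal = {}
launches_by_principal = {}

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    StartTime=window_start,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        principal = detail.get("userIdentity", {}).get("arn", "unknown")
        if detail.get("errorCode") == "Client.DryRunOperation":
            dry_run_by_principal.setdefault(principal, []).append(detail["eventTime"])
        elif "errorCode" not in detail:
            launches_by_principal.setdefault(principal, []).append(detail["eventTime"])

# A principal that probes with DryRun and then launches for real inside the
# same window deserves an immediate look.
for principal in dry_run_by_principal:
    if principal in launches_by_principal:
        print(f"Suspicious DryRun-then-launch pattern: {principal}")
```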

Scaling abuse: from “some miners” to “all your quotas”

After establishing they could operate, the actor:

  • Created dozens of ECS clusters (sometimes more than 50)
  • Registered a task definition pointing to a malicious container image
  • Created Auto Scaling groups sized to launch anywhere from 20 to 999 instances

This is cloud-native abuse. It’s not subtle, but it’s fast—and it overwhelms teams that rely on ticket-driven processes or human approvals.
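
A minimal sketch of what catching this scaling abuse could look like with boto3, assuming you sweep the account on a schedule. The MaxSize and cluster-count thresholds are illustrative assumptions; tune them to your own baseline.

```python
# Minimal sketch: flag cloud-native scaling abuse.
# MAX_SIZE_LIMIT and CLUSTER_LIMIT are illustrative assumptions, not AWS defaults.
from datetime import datetime, timedelta, timezone

import boto3

MAX_SIZE_LIMIT = 20   # any ASG allowed to grow past this deserves review
CLUSTER_LIMIT = 10    # more ECS clusters than this in one account is unusual here
RECENT = datetime.now(timezone.utc) - timedelta(hours=1)

autoscaling = boto3.client("autoscaling")
ecs = boto3.client("ecs")

# Newly created Auto Scaling groups with an enormous MaxSize (20-999 in the campaign)
for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
    for asg in page["AutoScalingGroups"]:
        if asg["CreatedTime"] >= RECENT and asg["MaxSize"] > MAX_SIZE_LIMIT:
            print(f"Review ASG {asg['AutoScalingGroupName']}: MaxSize={asg['MaxSize']}")

# A sudden explosion in ECS cluster count (the campaign created 50+)
cluster_arns = []
for page in ecs.get_paginator("list_clusters").paginate():
    cluster_arns.extend(page["clusterArns"])
if len(cluster_arns) > CLUSTER_LIMIT:
    print(f"Unusual ECS cluster count: {len(cluster_arns)}")
```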

The persistence technique that slows incident response

Answer first: Attackers used instance termination protection (disableApiTermination=True) so responders couldn’t terminate infected instances until they explicitly reversed that setting.

The campaign stood out because it used ModifyInstanceAttribute to set termination protection. That blocks common “kill it now” response steps across console, CLI, and API.

This isn’t theoretical. It’s the sort of “speed bump” that turns a 15-minute cleanup into a multi-hour, high-stress incident—especially if your remediation is automated and assumes it can terminate resources immediately.

A good cloud attacker doesn’t just create resources. They create friction for the people trying to delete them.

From a defensive standpoint, you should treat unexpected toggling of termination protection as a strong indicator of malicious intent unless you can tie it to a known change window.
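
One way to surface those toggles is to sweep CloudTrail for ModifyInstanceAttribute events that enabled termination protection. The sketch below assumes the disableApiTermination value appears inside requestParameters the way CloudTrail typically records it; verify against your own trail before relying on it.

```python
# Minimal sketch: surface ModifyInstanceAttribute calls that turned on
# termination protection, so they can be matched against known change windows.
# The requestParameters layout is an assumption; confirm against your trail.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)

for page in cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ModifyInstanceAttribute"}],
    StartTime=start,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        params = detail.get("requestParameters", {}) or {}
        protect = params.get("disableApiTermination", {})
        if isinstance(protect, dict) and protect.get("value") is True:
            print(
                f"Termination protection enabled on {params.get('instanceId')} "
                f"by {detail.get('userIdentity', {}).get('arn', 'unknown')}"
            )
```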

Why compromised IAM credentials are still winning (and why it’s worse in December)

Answer first: Credentials win because they blend in with normal automation, and year-end operations create noise—more changes, more exceptions, and slower human response.

December is a perfect storm for this kind of attack:

  • Teams run year-end jobs, migrations, and capacity tests
  • Engineering coverage is thinner around holidays
  • Finance teams are closing books, so cost anomalies can be noticed late
  • “Temporary exceptions” to IAM permissions often linger into Q1

Attackers don’t need you to be careless forever. They need you to be distracted for a day.

The hard truth: long-lived access keys plus broad permissions are an evergreen breach multiplier. Even organizations that “use MFA” often still have older programmatic keys floating around in CI logs, developer laptops, or third-party tools.

Where AI helps: detecting credential compromise before the bill hits

Answer first: AI improves cloud threat detection by correlating identity actions, API sequences, and resource behavior to spot misuse early—often before any mining is profitable.

Traditional alerting struggles here because each individual step can look legitimate:

  • List* and Describe* calls are common
  • DryRun checks are legitimate
  • Creating ECS clusters and registering task definitions can be normal
  • Autoscaling activity can be expected during traffic spikes

AI-based security is effective when it focuses on behavioral sequences and context, not single events.

A high-signal behavioral pattern (that you can detect)

A practical detection model (AI or rules-based, but AI does it better at scale) should flag sequences like:

  1. New/rare principal or IP uses IAM credentials
  2. Rapid enumeration across multiple services (IAM, ECS, EC2, STS)
  3. Permission validation behavior (DryRun or frequent AccessDenied probing)
  4. Sudden compute provisioning in atypical regions or instance families
  5. Container image pull from unfamiliar registries or unusual task commands
  6. Attempts to slow response (termination protection, new roles, overly permissive Lambda functions)

The key is time compression. Humans don’t normally perform six categories of actions across services in a few minutes. Scripts do. Attackers do.
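
A stripped-down sketch of the time-compression idea: count how many of these categories a single principal touches inside a sliding window. The category-to-API mapping, the "RunInstances:DryRun" normalized name, and the three-category threshold are all illustrative assumptions; a real model would learn them.

```python
# Minimal sketch of "time compression": several suspicious categories hit by one
# principal within minutes looks like a script, not a human.
from collections import defaultdict
from datetime import timedelta

# Illustrative category-to-event mapping; a production model would learn these.
CATEGORIES = {
    "enumeration": {"ListUsers", "ListRoles", "DescribeInstances", "GetCallerIdentity"},
    "permission_probe": {"RunInstances:DryRun"},  # hypothetical normalized name for DryRun checks
    "provisioning": {"RunInstances", "CreateCluster", "RegisterTaskDefinition"},
    "scaling": {"CreateAutoScalingGroup", "UpdateAutoScalingGroup"},
    "persistence": {"ModifyInstanceAttribute", "CreateRole", "CreateFunction"},
}

def flag_compressed_sequences(events, window=timedelta(minutes=10), min_categories=3):
    """events: iterable of (timestamp, principal_arn, event_name), sorted by timestamp."""
    recent = defaultdict(list)   # principal -> [(timestamp, category), ...] inside the window
    flagged = {}
    for ts, principal, name in events:
        category = next((c for c, names in CATEGORIES.items() if name in names), None)
        if category is None:
            continue
        recent[principal].append((ts, category))
        recent[principal] = [(t, c) for t, c in recent[principal] if ts - t <= window]
        hit = {c for _, c in recent[principal]}
        if len(hit) >= min_categories:   # several categories in minutes: script-speed behavior
            flagged[principal] = hit
    return flagged
```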

What “good” AI-driven anomaly detection looks like in AWS

In practice, the best AI in cybersecurity for cloud environments does three things well:

  • Entity baselining: “What is normal for this IAM user/role?” including region, API mix, time of day, and typical services
  • Sequence modeling: “Is this API call chain consistent with known deployment pipelines, or consistent with abuse?”
  • Response recommendations: “If this is mining, here’s what to disable first to stop spend fast.”

If your detection system can’t explain why something is suspicious, responders waste time debating alerts instead of containing the threat.
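
As a toy example of the entity-baselining bullet above, the sketch below compares one observed action against a stored profile of a principal's usual regions, APIs, and active hours, and returns human-readable reasons so the alert explains itself. The profile shape and deviation rules are illustrative assumptions.

```python
# Minimal sketch of entity baselining: compare observed activity for a principal
# against a stored profile. Profile storage and the deviation rules are
# illustrative assumptions.
def deviates_from_baseline(profile, observed):
    """
    profile: {"regions": set, "apis": set, "active_hours": set of ints 0-23}
    observed: {"region": str, "api": str, "hour": int}
    Returns a list of plain-language reasons (empty list = nothing unusual).
    """
    reasons = []
    if observed["region"] not in profile["regions"]:
        reasons.append(f"new region {observed['region']}")
    if observed["api"] not in profile["apis"]:
        reasons.append(f"unusual API {observed['api']}")
    if observed["hour"] not in profile["active_hours"]:
        reasons.append(f"off-hours activity at {observed['hour']}:00")
    return reasons

# Example: a deploy role that normally works in us-east-1 during business hours
baseline = {
    "regions": {"us-east-1"},
    "apis": {"RunInstances", "DescribeInstances"},
    "active_hours": set(range(9, 19)),
}
print(deviates_from_baseline(
    baseline, {"region": "ap-southeast-2", "api": "CreateAutoScalingGroup", "hour": 2}
))
```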

The IAM controls that would have made this campaign much harder

Answer first: You reduce blast radius with least privilege, short-lived credentials, strong MFA, and guardrails on high-cost actions.

Attackers succeeded because the initial IAM credentials had admin-like permissions. So the fastest win is shrinking what any single identity can do.

Identity hardening checklist (practical, not aspirational)

Start with these changes because they reduce risk immediately (a guardrail policy sketch follows the checklist):

  1. Eliminate long-term access keys where possible

    • Prefer short-lived, automatically rotated credentials
    • If you must keep keys, enforce rotation and monitor key age aggressively
  2. Require MFA for human users—and enforce it with conditions

    • Don’t stop at “MFA enabled.” Enforce MFA at policy level for sensitive actions.
  3. Least privilege for IAM principals (and remove wildcard admin where it’s not essential)

    • Separate “deploy” roles from “account admin” roles
    • Avoid giving developers permissions to create new IAM roles unless required
  4. Add guardrails for cost-amplifying actions

    • Tighten permissions for EC2 instance families (GPU/ML), autoscaling, and ECS cluster creation
    • Consider permission boundaries for roles used by CI/CD
  5. Lock down role and policy creation

    • Monitor and restrict CreateRole, AttachRolePolicy, CreateServiceLinkedRole
    • These are common steps in attacker “setup” and persistence
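
To make item 4 concrete, here is a hedged sketch of a permissions boundary that keeps a CI/CD role from launching GPU/ML instance families. The policy name, role name, and instance-type patterns are assumptions for illustration; adapt them to your own account.

```python
# Minimal sketch: a permissions boundary that allows normal work but explicitly
# denies launches of expensive GPU/ML instance families.
# Policy name, role name, and instance-type patterns are hypothetical.
import json

import boto3

iam = boto3.client("iam")

boundary_document = {
    "Version": "2012-10-17",
    "Statement": [
        # Boundary baseline: effective permissions stay whatever the identity
        # policies already allow...
        {"Sid": "BaselineAllow", "Effect": "Allow", "Action": "*", "Resource": "*"},
        # ...except for launches of expensive instance families, which are denied.
        {
            "Sid": "DenyExpensiveInstanceFamilies",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringLike": {"ec2:InstanceType": ["p*", "g*", "inf*", "trn*"]}},
        },
    ],
}

policy = iam.create_policy(
    PolicyName="boundary-deny-expensive-compute",   # hypothetical name
    PolicyDocument=json.dumps(boundary_document),
)
iam.put_role_permissions_boundary(
    RoleName="ci-deploy-role",                      # hypothetical CI/CD role
    PermissionsBoundary=policy["Policy"]["Arn"],
)
```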

Container controls matter more than many teams admit

This campaign used a malicious container image to run mining code. If your organization treats container sources as “developer choice,” you’re leaving a gap big enough to drive an incident through.

Minimum viable container security controls:

  • Allowlists for approved registries/images in production
  • Scanning for suspicious entrypoints and shell bootstrap scripts
  • Alerting on unusual CPU allocations in ECS task definitions

Mining workloads have signatures: they want consistent CPU and long runtimes. That’s detectable.
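
A minimal sweep along those lines: check active ECS task definitions against an approved-registry allowlist and flag large CPU reservations. The registry allowlist and the CPU threshold below are illustrative assumptions.

```python
# Minimal sketch: audit ECS task definitions for unapproved image registries and
# unusually large CPU reservations. Allowlist and threshold are illustrative.
import boto3

APPROVED_REGISTRIES = ("123456789012.dkr.ecr.us-east-1.amazonaws.com/",)  # hypothetical
CPU_ALERT_UNITS = 4096  # 4 vCPU; miners want big, steady CPU reservations

ecs = boto3.client("ecs")

for page in ecs.get_paginator("list_task_definitions").paginate(status="ACTIVE"):
    for arn in page["taskDefinitionArns"]:
        td = ecs.describe_task_definition(taskDefinition=arn)["taskDefinition"]
        for container in td.get("containerDefinitions", []):
            image = container.get("image", "")
            if not image.startswith(APPROVED_REGISTRIES):
                print(f"Unapproved image {image} in {arn}")
        if int(td.get("cpu", 0) or 0) >= CPU_ALERT_UNITS:
            print(f"High CPU reservation ({td['cpu']} units) in {arn}")
```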

Incident response: how to win the first 30 minutes

Answer first: Stop spend first, then remove access, then clean up resources—because miners can scale faster than you can investigate.

When crypto mining hits, your priorities aren’t the same as a data theft incident. Here’s the order I’ve found works best, with a short automation sketch after each step:

1) Contain identity and spend

  • Disable or quarantine the compromised IAM user/keys
  • Apply emergency SCPs or temporary deny policies for:
    • RunInstances, autoscaling actions, ECS cluster/service creation
  • Add billing and quota alarms if they aren’t already firing
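
A containment sketch for this step, assuming you already know which IAM user is compromised. The user name and inline policy name are hypothetical; in practice this belongs in your automated response, not a hand-run script.

```python
# Minimal sketch of step 1: deactivate a compromised IAM user's keys and attach
# an inline deny for compute-provisioning actions while you investigate.
# COMPROMISED_USER and the policy name are hypothetical.
import json

import boto3

iam = boto3.client("iam")
COMPROMISED_USER = "ci-service-user"   # hypothetical

# 1. Deactivate every access key the principal holds
for key in iam.list_access_keys(UserName=COMPROMISED_USER)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=COMPROMISED_USER,
        AccessKeyId=key["AccessKeyId"],
        Status="Inactive",
    )

# 2. Explicit deny for the cost-amplifying actions used in this campaign
quarantine_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "ec2:RunInstances",
            "autoscaling:*",
            "ecs:CreateCluster",
            "ecs:CreateService",
            "ecs:RegisterTaskDefinition",
        ],
        "Resource": "*",
    }],
}
iam.put_user_policy(
    UserName=COMPROMISED_USER,
    PolicyName="incident-quarantine",
    PolicyDocument=json.dumps(quarantine_policy),
)
```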

2) Identify and reverse persistence blockers

  • Search for unexpected ModifyInstanceAttribute calls
  • Re-enable the ability to terminate instances wherever disableApiTermination was set
  • Review newly created roles and Lambda functions (especially permissive ones)
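
A sketch of the termination-protection reversal, assuming your triage has already produced a list of suspect instance IDs (the ID below is a placeholder).

```python
# Minimal sketch of step 2: clear disableApiTermination on suspect instances so
# responders can terminate them. The instance list is a hypothetical triage output.
import boto3

ec2 = boto3.client("ec2")

suspect_instance_ids = ["i-0abc1234def567890"]   # hypothetical, from your triage list

for instance_id in suspect_instance_ids:
    attr = ec2.describe_instance_attribute(
        InstanceId=instance_id, Attribute="disableApiTermination"
    )
    if attr["DisableApiTermination"]["Value"]:
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            DisableApiTermination={"Value": False},
        )
        print(f"Termination protection removed from {instance_id}")
```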

3) Eradicate and recover

  • Terminate unauthorized ECS services, clusters, EC2 instances, and autoscaling groups
  • Rotate credentials broadly if you suspect lateral movement
  • Backfill detections: make sure the sequence that happened will alert next time
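
A teardown sketch for this step. All resource identifiers are hypothetical outputs of your triage, and you would normally preserve evidence (snapshots, task definitions, logs) before force-deleting anything.

```python
# Minimal sketch of step 3: tear down the attacker's compute after termination
# protection is cleared and evidence is preserved. Identifiers are hypothetical.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")
ecs = boto3.client("ecs")

ec2.terminate_instances(InstanceIds=["i-0abc1234def567890"])   # hypothetical
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName="miner-asg", ForceDelete=True          # hypothetical
)
# Note: ECS clusters can only be deleted once their services and container
# instances are removed; scale down and delete those first.
ecs.delete_cluster(cluster="miner-cluster")                     # hypothetical
```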

If your runbooks don’t include “termination protection reversal,” update them. This campaign won’t be the last to use it.

Questions security leaders should ask after reading this

Answer first: If you can’t answer these quickly, you’re relying on luck.

  • Which IAM users still have long-term access keys, and how old are they?
  • Who can create new roles and attach AWS-managed policies?
  • Can we detect DryRun permission checks followed by rapid compute provisioning?
  • Do we have automated containment for sudden autoscaling to triple digits?
  • Are container images in production restricted to approved sources?
  • Can our tooling respond when attackers toggle termination protection?

These aren’t theoretical governance questions. They’re “will we eat a five-figure cloud bill over a weekend?” questions.
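
If you want a fast answer to the first question, here is a short audit sketch: list IAM users whose access keys are older than a chosen threshold (90 days here, purely illustrative).

```python
# Minimal sketch: find IAM users with long-lived access keys and report key age.
# The 90-day threshold is an illustrative assumption.
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
MAX_AGE_DAYS = 90
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
            age_days = (now - key["CreateDate"]).days
            if age_days > MAX_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age_days} days old "
                      f"(status {key['Status']})")
```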

What to do next if you want AI to carry more of the load

AI can’t fix sloppy permissions, but it’s excellent at spotting the moment credentials start behaving like an attacker script and triggering fast containment. If your cloud security posture still depends on periodic IAM reviews and manual triage, you’ll keep losing the speed race.

If you’re building an AI-driven cloud defense program, start small and measurable: pick credential compromise detection as the first use case, wire it to automated response for high-cost actions, and test it with tabletop scenarios that include persistence tricks like termination protection.

What would happen in your environment if an attacker got admin-like IAM credentials at 2 a.m. during holiday coverage—would you detect it in minutes, or in next month’s invoice?
