How AI Flags AWS IAM Crypto Mining in Minutes

AI-driven detection can spot AWS IAM crypto mining patterns in minutes. Learn the attack chain, persistence tricks, and practical controls to stop scale-out fast.

AWS Security · IAM · Cloud Threat Detection · Cryptomining · ECS · Incident Response

On November 2, 2025, AWS threat detections began lighting up for a pattern that should make every cloud team a little uncomfortable: a threat actor used valid, admin-like IAM credentials and had crypto miners running within about 10 minutes. No exploited AWS vulnerability. No exotic zero-day. Just credentials that worked.

Most companies get this wrong: they treat cloud cryptomining as a “cost problem” and IAM credential misuse as a “security problem.” Attackers don’t separate them. This campaign shows exactly why. The same stolen access that spins up miners can also create roles, plant persistence, and set up email capability for phishing.

This matters because the fastest wins for attackers are often the simplest ones. If your security posture relies on “we’ll see it in the console” or “we’ll terminate the instances,” you’re already behind—especially when the attacker uses persistence tricks designed to slow you down. The better way to approach this is AI-assisted cloud detection and response: detect abnormal identity behavior early, validate intent, and contain automatically before the bill (and blast radius) explodes.

What happened in this AWS cryptomining campaign (and why it scales)

Answer first: The attacker used compromised IAM credentials to enumerate permissions, confirm they could launch compute, and then rapidly deploy miners across ECS and EC2, while adding persistence that made cleanup harder.

According to AWS’s analysis of the campaign, the adversary operated from an external hosting provider and followed a repeatable playbook:

  1. Discovery and permission validation

    • Enumerated environment resources and quotas.
    • Called RunInstances using the DryRun flag to verify launch permissions without actually launching anything.
  2. Role and service setup

    • Created roles using CreateServiceLinkedRole and CreateRole.
    • Attached a baseline Lambda execution policy to support automation and persistence.
  3. ECS cryptomining deployment

    • Created dozens of ECS clusters (in some cases, 50+ clusters in a single incident).
    • Registered a task definition referencing a malicious container image.
    • Launched mining on ECS Fargate.
  4. EC2 scaling for maximum consumption

    • Created autoscaling groups configured to grow from 20 up to 999 instances.
    • Targeted a broad set of instance families, including GPU and machine learning instances.

The pattern is simple: confirm access, spread fast, consume quotas, and make removal annoying.
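
If you want to retro-hunt for this playbook in your own account, the API names above are enough to start. Here's a minimal sketch using boto3 and CloudTrail's lookup_events (management events, last 90 days only); the event list and the "three or more stages" threshold are illustrative choices, not drawn from AWS's report:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

# API calls that map to the playbook stages described above.
PLAYBOOK_EVENTS = [
    "RunInstances",            # DryRun probing and real launches
    "CreateServiceLinkedRole",
    "CreateRole",
    "CreateCluster",           # ECS cluster sprawl
    "RegisterTaskDefinition",
    "CreateAutoScalingGroup",
]

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=7)

stages_by_principal = defaultdict(set)
for event_name in PLAYBOOK_EVENTS:
    paginator = cloudtrail.get_paginator("lookup_events")
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    ):
        for event in page["Events"]:
            stages_by_principal[event.get("Username", "unknown")].add(event_name)

# A single principal touching several stages in one week is worth a close look.
for principal, events in stages_by_principal.items():
    if len(events) >= 3:
        print(f"Review {principal}: {sorted(events)}")
```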

Why DryRun is a quiet gift to attackers

Answer first: DryRun lets attackers test whether they have power without spending money or leaving the same footprint as an actual launch.

Think about what that means operationally. If you’re alerting heavily on instance creation but lightly on permission probing, the attacker gets a “green light” with lower detection risk. That’s a gap AI-driven detection can close by correlating intent signals (identity + API sequence + timing), not just counting resource events.
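
To make the signal concrete, here's a minimal sketch of what a DryRun probe looks like with boto3; the AMI ID and instance type are placeholders, and the point is to show the behavior, not AWS's detection logic:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    # With DryRun=True, nothing is launched either way.
    ec2.run_instances(
        ImageId="ami-1234567890abcdef0",  # placeholder; use a real AMI in your region
        InstanceType="c5.large",
        MinCount=1,
        MaxCount=1,
        DryRun=True,
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Caller COULD launch this instance (permission confirmed).")
    elif code == "UnauthorizedOperation":
        print("Caller cannot launch instances.")
    else:
        raise
```

The call never creates a resource, but it is still logged in CloudTrail, so the probe is detectable, provided something is actually watching for it.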

The persistence trick that breaks “just terminate it” response

Answer first: The campaign used EC2 termination protection (disableApiTermination=True) to block easy termination and disrupt automated remediation.

One detail stands out because it’s so pragmatic. The actor used ModifyInstanceAttribute with disableApiTermination set to True. That prevents termination through the console, CLI, or API until responders explicitly re-enable termination.

This isn’t clever in a lab. It’s clever in a real incident, at 2 a.m., when your on-call engineer is trying to stop the bleeding. Termination protection:

  • Slows down containment
  • Breaks “kill switch” runbooks
  • Causes automation to fail silently or loop
  • Buys the attacker more mining time

If your containment strategy is mostly “terminate anything suspicious,” you need a parallel path: policy-level controls (deny the action), identity containment (disable keys/roles), and network containment (egress restrictions) so the instance can’t do useful work even if it can’t be terminated immediately.
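
Worth having ready before the 2 a.m. incident: a small helper that checks for termination protection and clears it before terminating. A minimal boto3 sketch (instance IDs would come from your triage, not be hardcoded):

```python
import boto3

ec2 = boto3.client("ec2")

def force_terminate(instance_id: str) -> None:
    """Clear disableApiTermination if set, then terminate the instance."""
    attr = ec2.describe_instance_attribute(
        InstanceId=instance_id, Attribute="disableApiTermination"
    )
    if attr["DisableApiTermination"]["Value"]:
        # This is the attribute the actor flipped to True; flip it back first.
        ec2.modify_instance_attribute(
            InstanceId=instance_id, DisableApiTermination={"Value": False}
        )
    ec2.terminate_instances(InstanceIds=[instance_id])

# Example: force_terminate("i-0123456789abcdef0")
```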

Persistence via Lambda and SES: not just mining

Answer first: This campaign also laid groundwork for continued access and potential phishing by creating a permissive Lambda function and an IAM user with SES privileges.

AWS observed additional persistence behaviors, including:

  • A Lambda function configured so it could be invoked broadly (dangerous in the wrong hands)
  • A new IAM user (notably named in the reporting as user-x1x2x3x4) with AmazonSESFullAccess, enabling high-volume email sending

That’s the uncomfortable truth: cryptomining is often the first monetization step, not the last. If an actor can create identities and grant SES permissions, you’re looking at a path to internal phishing, customer targeting, or brand abuse.
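
Both of those artifacts are easy to hunt for. A minimal boto3 sketch, assuming read-only IAM and Lambda access, that lists principals with AmazonSESFullAccess attached and Lambda functions whose resource policy allows any principal to invoke them:

```python
import json

import boto3

iam = boto3.client("iam")
lam = boto3.client("lambda")

# 1) Who has AmazonSESFullAccess attached?
ses_policy_arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
entities = iam.list_entities_for_policy(PolicyArn=ses_policy_arn)
for user in entities.get("PolicyUsers", []):
    print(f"SES full access attached to user: {user['UserName']}")
for role in entities.get("PolicyRoles", []):
    print(f"SES full access attached to role: {role['RoleName']}")

# 2) Which Lambda functions can be invoked by anyone?
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        try:
            policy = json.loads(lam.get_policy(FunctionName=fn["FunctionName"])["Policy"])
        except lam.exceptions.ResourceNotFoundException:
            continue  # no resource policy at all
        for stmt in policy.get("Statement", []):
            principal = stmt.get("Principal")
            if principal == "*" or principal == {"AWS": "*"}:
                print(f"Broadly invokable Lambda: {fn['FunctionName']}")
```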

Where AI helps: detecting IAM misuse before the miners land

Answer first: AI is most effective here when it models behavior, not just indicators—flagging abnormal identity sequences (like DryRun → role creation → cluster sprawl) and triggering automated containment.

Traditional detection often struggles with cloud attacks because the “malware” is mostly API calls. The attacker’s tools look like automation. Their persistence looks like infrastructure-as-code. That’s why sequence and context matter.

Here’s what AI-driven cloud security can do better than static rules alone:

1) Spot “impossible operator behavior” in IAM activity

Answer first: AI can baseline how admins and CI systems normally behave, then flag deviations that humans miss.

Examples that should trigger high-confidence alerts when clustered together:

  • A principal that rarely touches EC2 suddenly calling RunInstances with DryRun
  • Role creation (CreateRole, AttachRolePolicy) from a source network never seen before
  • ECS cluster creation bursts (e.g., 10+ clusters in minutes) when your org typically has stable cluster counts
  • New autoscaling groups with extreme max values (like 999) that don’t match normal capacity planning

A good model doesn’t need to “know” it’s crypto mining immediately. It needs to know this is not how your environment behaves.
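
Here's a toy version of that idea, assuming you've already parsed recent CloudTrail events into (principal, event, source IP, time) tuples. It scores the combination of a DryRun-style probe, role creation, and cluster creation from an unfamiliar network, rather than any single event; the event set, window, and scoring are illustrative only:

```python
from datetime import datetime, timedelta

# (principal, event_name, source_ip, timestamp), e.g. parsed from CloudTrail
Event = tuple[str, str, str, datetime]

SUSPICIOUS_SEQUENCE = {"RunInstances", "CreateRole", "CreateCluster"}

def score_principal(events: list[Event], known_ips: set[str],
                    window: timedelta = timedelta(minutes=30)) -> int:
    """Rough heuristic: how many sequence stages fired close together,
    plus a bonus if any call came from a source IP we've never seen."""
    events = sorted(events, key=lambda e: e[3])
    best = 0
    for i, (_, _, _, start) in enumerate(events):
        stages = set()
        new_ip = False
        for _, name, ip, ts in events[i:]:
            if ts - start > window:
                break
            if name in SUSPICIOUS_SEQUENCE:
                stages.add(name)
            if ip not in known_ips:
                new_ip = True
        best = max(best, len(stages) + (1 if new_ip else 0))
    return best  # 4 means all three stages from an unfamiliar network
```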

2) Detect cryptomining patterns across ECS and EC2

Answer first: AI correlates signals across services—identity actions, container behavior, and compute consumption—to catch mining even when any single signal looks ambiguous.

Mining in the cloud tends to create a recognizable footprint:

  • Sudden, sustained CPU/GPU saturation
  • Unusual container images pulled from public registries (especially new-to-you images)
  • Task definitions with unexpected CPU/memory allocations
  • Rapid scale-out events that don’t align with traffic or business cycles

In December, cloud teams often run end-of-year jobs—data pipelines, model training, seasonal e-commerce spikes. AI helps by correlating business context and historical usage so you don’t drown in false positives while still catching abuse.
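
One of the cheapest footprint checks is sustained CPU saturation that nothing in your business calendar explains. A minimal CloudWatch sketch follows; the 90% threshold and 6-hour window are assumptions to tune, and GPU utilization needs the CloudWatch agent, which this doesn't cover:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            iid = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": iid}],
                StartTime=start,
                EndTime=end,
                Period=3600,          # hourly datapoints
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            # Flag instances pegged near 100% for the whole window.
            if points and all(p["Average"] > 90 for p in points):
                print(f"Sustained CPU saturation on {iid}")
```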

3) Automate containment that attackers can’t easily “out-click”

Answer first: The fastest containment move is identity containment—AI-guided response should disable or constrain the compromised principal, not chase instances one by one.

Automation worth having in your playbooks:

  • Disable or rotate access keys for the suspected compromised IAM user
  • Revoke active sessions where possible
  • Apply a temporary permissions boundary or explicit deny for:
    • ec2:RunInstances
    • ecs:CreateCluster, ecs:RegisterTaskDefinition, ecs:CreateService
    • iam:CreateRole, iam:AttachRolePolicy
    • ec2:ModifyInstanceAttribute (especially termination protection changes)
  • Quarantine by tagging and applying restrictive SCPs (in multi-account orgs)
  • Egress controls: block mining pool traffic patterns where feasible

If the attacker can’t create new compute, can’t create new roles, and can’t exfiltrate or communicate outward, the incident becomes a cleanup exercise—not an ongoing fire.
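
Identity containment is also the easiest step to automate. A minimal sketch, assuming the compromised principal is an IAM user; the user name and policy name are placeholders, and in a multi-account org you'd push a similar deny as an SCP instead:

```python
import json

import boto3

iam = boto3.client("iam")

COMPROMISED_USER = "suspected-compromised-user"  # placeholder: comes from your triage
DENY_ACTIONS = [
    "ec2:RunInstances",
    "ecs:CreateCluster",
    "ecs:RegisterTaskDefinition",
    "ecs:CreateService",
    "iam:CreateRole",
    "iam:AttachRolePolicy",
    "ec2:ModifyInstanceAttribute",
]

# 1) Deactivate every access key on the principal.
for key in iam.list_access_keys(UserName=COMPROMISED_USER)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=COMPROMISED_USER, AccessKeyId=key["AccessKeyId"], Status="Inactive"
    )

# 2) Attach an explicit inline deny so any remaining sessions lose the
#    actions that matter for this attack chain.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": DENY_ACTIONS, "Resource": "*"}],
}
iam.put_user_policy(
    UserName=COMPROMISED_USER,
    PolicyName="ir-temporary-deny",
    PolicyDocument=json.dumps(deny_policy),
)
```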

A practical stance: respond to cloud cryptomining like credential theft first, cost anomaly second.

A prevention checklist that actually maps to this attack chain

Answer first: Stop long-lived keys, shrink permissions, and put guardrails on “expensive” and “sticky” actions like scaling and termination protection.

Here’s a targeted checklist aligned to what the actor did—use it as a gap assessment.

Identity hardening (blocks initial access and escalation)

  • Eliminate long-term access keys wherever possible; use temporary credentials.
  • Enforce MFA for every human IAM principal.
  • Apply least privilege aggressively (especially for principals that can create roles or compute).
  • Require approvals or just-in-time access for admin-like permissions.

Guardrails for compute abuse (limits blast radius)

  • Set sane service quotas and alerts for EC2, ECS, and Fargate usage.
  • Detect and block autoscaling configs with unrealistic ceilings (for many orgs, “999” should be automatically rejected); see the sketch after this list.
  • Monitor for ECS task definitions requesting unusual CPU/memory compared to baseline.
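
Here's a sketch of that ceiling check with boto3; the 100-instance threshold is an assumption, so pick one that matches your own capacity planning:

```python
import boto3

autoscaling = boto3.client("autoscaling")
MAX_SANE_CEILING = 100  # tune to your environment

for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
    for group in page["AutoScalingGroups"]:
        if group["MaxSize"] > MAX_SANE_CEILING:
            print(
                f"ASG {group['AutoScalingGroupName']} allows up to "
                f"{group['MaxSize']} instances; review or cap it."
            )
```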

Container controls (catches the delivery mechanism)

  • Enforce image allowlists or verified registries for production.
  • Scan images and flag suspicious entrypoints (like “download script then execute”).
  • Alert when a never-before-seen public image is used in a sensitive account.

Logging and detection (makes investigation possible)

  • Turn on centralized API logging across accounts.
  • Enable managed threat detection and wire alerts to response automation.
  • Ensure high-fidelity logging for IAM, EC2, ECS, and Lambda events.

Response readiness (prevents “termination protection” from stalling you)

  • Prebuild runbooks to re-enable termination safely.
  • Add explicit detection for ModifyInstanceAttribute changes involving termination protection.
  • Make identity containment the first button you press.

What to do if you suspect AWS IAM credentials are compromised

Answer first: Assume the attacker is still active, contain identity access immediately, then hunt for persistence across IAM, ECS, EC2, and Lambda.

A practical triage order I’ve found effective:

  1. Contain

    • Disable/rotate credentials for suspected principals.
    • Apply temporary denies on compute and IAM role creation.
  2. Stop the spend

    • Identify and halt autoscaling groups and ECS services created in the last few hours.
    • Reduce scale targets to zero where appropriate.
  3. Hunt persistence

    • New IAM users, new access keys, newly attached managed policies
    • Newly created roles and trust policies that allow broad assumption
    • Lambda functions with overly permissive invocation permissions
  4. Eradicate and recover

    • Remove unauthorized resources
    • Re-enable termination where needed
    • Fix the initial credential exposure path (CI logs, leaked keys, compromised developer machine, etc.)

If you only delete miners but leave a backdoor IAM user or role behind, you’re going to repeat the incident.
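
For step 3, a quick pass over IAM creation timestamps catches most of the obvious persistence. A minimal sketch that lists users, access keys, and roles created in the last 24 hours (widen the window to match your incident timeline):

```python
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        if user["CreateDate"] > cutoff:
            print(f"New IAM user: {user['UserName']} ({user['CreateDate']})")
        for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
            if key["CreateDate"] > cutoff:
                print(f"New access key on {user['UserName']}: {key['AccessKeyId']}")

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        if role["CreateDate"] > cutoff:
            print(f"New IAM role: {role['RoleName']} ({role['CreateDate']})")
```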

Where this is heading in 2026: faster abuse, more “normal” signals

Cloud attackers are getting better at blending in. The next iteration won’t just create 50 clusters loudly. It’ll create two clusters that look like your naming convention, scale gradually, and run at night. That’s exactly why AI in cloud security matters: it’s built to detect the weirdness in combinations—identity behavior, API sequences, and resource consumption—when each signal alone feels explainable.

If you want one primary takeaway from this AWS cryptomining campaign, it’s this: valid credentials are the new remote code execution. Treat IAM as your front line, and use AI-driven detection and automated containment to keep a 10-minute incident from becoming a week-long cleanup.

If your environment had a similar burst of DryRun calls, role creation, and sudden ECS/EC2 scale-out this week, would you catch it—and would you shut it down automatically—or would you notice when Finance forwards the bill?