Crypto miners can go live in AWS in 10 minutes with stolen IAM keys. Learn how AI detects credential abuse early and stops cryptojacking fast.

Stop AWS Cryptojacking: AI Detects IAM Abuse Fast
A crypto miner doesn’t need a zero-day to ruin your quarter. It just needs one over-privileged AWS IAM credential and about 10 minutes.
That’s not hypothetical. A large AWS cryptomining campaign observed in late 2025 showed exactly how quickly attackers can move once they’ve got valid IAM access: enumerate, validate permissions, spin up compute at scale, and then add just enough persistence to slow your response while the bill climbs.
This post is part of our AI in Cybersecurity series, and I’m going to take a clear stance: cloud cryptojacking is primarily an identity security failure, and AI is one of the most practical ways to catch it early—because humans and static rules don’t keep up with the speed and creativity of credential abuse.
What this AWS cryptomining campaign teaches us
The lesson: if you treat IAM as “just access control,” you’ll miss that it’s also your primary attack surface in cloud environments.
AWS reported an ongoing campaign where attackers operated from external infrastructure, used compromised IAM user credentials with admin-like privileges, and had miners running within 10 minutes of initial access. The target services weren’t exotic—they were the same building blocks legitimate teams use every day: EC2, ECS/Fargate, Auto Scaling, IAM roles, and Lambda.
Here’s the part most teams underestimate: cryptojacking isn’t only about cost. It’s often a signal of broader account compromise. In this campaign, attackers also created an IAM user and granted Amazon SES full access, a setup that strongly suggests follow-on activity like phishing infrastructure.
The attack chain (simplified, but accurate)
Answer-first: The attacker workflow is “validate → provision → persist → expand.”
A clean way to think about the stages:
1. Discovery and permission validation
   - Attackers probed EC2 quotas and tested permissions using the RunInstances API with DryRun enabled. DryRun matters because it verifies permissions without actually launching instances, reducing cost, noise, and obvious evidence.
2. Role creation to operationalize access
   - They created service-linked roles and roles for Lambda and Auto Scaling.
   - They attached common managed policies (for example, Lambda basic execution) to get functions running quickly.
3. ECS cluster and task-based mining
   - Dozens of ECS clusters were created, sometimes more than 50 in a single attack.
   - A malicious container image was registered and deployed to mine cryptocurrency on Fargate nodes.
4. EC2 mining at scale (including expensive instance families)
   - Auto Scaling Groups were configured to scale aggressively (reported ranges like 20 to 999 instances).
   - Activity targeted GPU, machine learning, compute-optimized, memory-optimized, and general-purpose instances.
5. Persistence to slow incident response
   - Attackers used ModifyInstanceAttribute to enable termination protection (disableApiTermination=True).
   - Result: responders can’t terminate instances via normal tooling until they reverse that attribute, which buys attackers time.
My take: the disableApiTermination step isn’t clever in a “wow” way. It’s clever in a “this will absolutely waste 45 minutes across three teams during an incident” way.
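If you want to check for that probing in your own account, the DryRun signal is queryable today: a permission check that "succeeds" is logged by CloudTrail with the error code Client.DryRunOperation. Here is a minimal sketch using boto3; the 24-hour window and the choice to look only at RunInstances are assumptions to tune, not a complete detection.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

def find_dryrun_probes(hours=24):
    """Return RunInstances calls that 'failed' only because DryRun was set.

    A successful DryRun is recorded with errorCode Client.DryRunOperation,
    which is exactly the low-noise permission check this campaign used.
    """
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    probes = []
    paginator = cloudtrail.get_paginator("lookup_events")
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
        StartTime=start,
    ):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            if detail.get("errorCode") == "Client.DryRunOperation":
                probes.append({
                    "principal": detail.get("userIdentity", {}).get("arn"),
                    "sourceIp": detail.get("sourceIPAddress"),
                    "time": detail.get("eventTime"),
                })
    return probes

if __name__ == "__main__":
    for probe in find_dryrun_probes():
        print(probe)
```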
Why IAM credential abuse beats traditional detection
Answer-first: Credential abuse looks “valid” at the API layer, so rule-based detection often fires late.
When an attacker logs in with stolen keys, most classic controls struggle:
- No malware on endpoints (often). There may be nothing for EDR to catch.
- No vulnerability exploit signature. AWS explicitly framed this as not an AWS service vulnerability—attackers already had valid credentials.
- API activity can mimic legitimate automation. Creating clusters, registering task definitions, or launching instances is normal behavior in many orgs.
Security teams end up relying on:
- Manual CloudTrail review (slow)
- Static detection rules (brittle)
- Quota alarms (late)
- Cost alerts (too late, and financially painful)
This is exactly where AI-driven cloud threat detection earns its keep: it doesn’t need to “know” the attacker’s script in advance. It needs to recognize behavior that doesn’t fit your environment’s identity patterns.
Where AI helps most: detecting the “shape” of the attack
Answer-first: AI is strongest when it models normal identity and workload behavior, then flags high-confidence deviations in minutes.
If you want practical outcomes (not marketing), focus AI on four detection problems that map cleanly to this campaign.
1) Spotting anomalous API sequences (not single events)
Single events are noisy. Sequences are telling.
A strong example from this campaign is the chain:
- RunInstances with DryRun
- rapid IAM role creation
- ECS cluster creation burst
- task definition registration
- autoscaling configured unusually high
- termination protection enabled
An AI model can treat this as a behavioral fingerprint: not a fixed signature, but a pattern that rarely happens in legitimate workflows—especially within 10 minutes and from a new source network.
What to operationalize:
- Sequence-based detections (graph analytics help here)
- Time-to-action scoring (how fast did they go from auth → compute?)
- “First time seen” combos (principal + API + region + service)
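To make that concrete, here is a rough sketch of sequence scoring over CloudTrail. The event names come from the campaign described above; the 10-minute window and the alert threshold are placeholders you would tune, and a production version would run against your SIEM or a graph store rather than the LookupEvents API.

```python
import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

# Ordered "fingerprint" of the campaign: each step alone is normal,
# the full sequence inside a short window rarely is.
FINGERPRINT = [
    "RunInstances",            # including DryRun probes
    "CreateRole",
    "CreateCluster",
    "RegisterTaskDefinition",
    "CreateAutoScalingGroup",
    "ModifyInstanceAttribute",
]
WINDOW = timedelta(minutes=10)   # assumption: tune to your environment

def score_principals(hours=24):
    """Score each principal by how many fingerprint steps it hit in one window."""
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    seen = defaultdict(list)  # principal arn -> [(event_time, event_name)]

    paginator = cloudtrail.get_paginator("lookup_events")
    for name in FINGERPRINT:
        for page in paginator.paginate(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
            StartTime=start,
        ):
            for event in page["Events"]:
                detail = json.loads(event["CloudTrailEvent"])
                arn = detail.get("userIdentity", {}).get("arn", "unknown")
                seen[arn].append((event["EventTime"], name))

    scores = {}
    for arn, events in seen.items():
        events.sort()
        for i, (t0, _) in enumerate(events):
            names_in_window = {n for t, n in events[i:] if t - t0 <= WINDOW}
            scores[arn] = max(scores.get(arn, 0), len(names_in_window))
    return scores

if __name__ == "__main__":
    for arn, hits in sorted(score_principals().items(), key=lambda kv: -kv[1]):
        if hits >= 3:   # assumption: alert threshold
            print(f"{hits}/{len(FINGERPRINT)} fingerprint steps: {arn}")
```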
2) Detecting identity-context anomalies (who, where, how)
Most credential abuse has context mismatches:
- Unusual geo/IP/ASN compared to historical access
- New user-agent / SDK fingerprint
- First-time access to ECS/Auto Scaling by a principal that normally only touches S3 or IAM
- Sudden shift from read operations to high-impact write operations
AI-driven UEBA-style approaches work well here because they build baselines per:
- IAM user / role
- account
- environment (prod vs dev)
- time of week (yes, seasonality matters—December often brings change freezes, reduced staffing, and attackers know it)
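As a minimal illustration of the "first time seen" idea, the sketch below flags the first time a principal performs a write-style action against a new service/region combination. The read/write heuristic, the in-memory baseline, and the example ARN are all simplifications; a real UEBA pipeline persists baselines and scores many more context features (geo, ASN, user agent, time of week).

```python
from collections import defaultdict

# Assumption: `baseline` would live in a real store (DynamoDB, Redis, etc.);
# a plain dict keeps the sketch self-contained.
baseline = defaultdict(set)   # principal arn -> {(service, region, access_type)}

WRITE_PREFIXES = ("Create", "Run", "Put", "Modify", "Attach", "Register", "Update", "Delete")

def classify(event_name: str) -> str:
    """Coarse read/write split; real behavioral models are richer than this."""
    return "write" if event_name.startswith(WRITE_PREFIXES) else "read"

def is_anomalous(event: dict) -> bool:
    """Flag the first time a principal touches a (service, region, write) combo."""
    principal = event.get("userIdentity", {}).get("arn", "unknown")
    service = event.get("eventSource", "")          # e.g. ecs.amazonaws.com
    region = event.get("awsRegion", "")
    access = classify(event.get("eventName", ""))
    combo = (service, region, access)

    first_time = combo not in baseline[principal]
    baseline[principal].add(combo)
    # In production you would alert only on write access that is both first-time
    # and out of context (new network, odd hours), not on every novelty.
    return first_time and access == "write"

# Example: a principal that normally only reads S3 suddenly creates ECS clusters
print(is_anomalous({
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/ci-backup"},  # hypothetical
    "eventSource": "ecs.amazonaws.com",
    "eventName": "CreateCluster",
    "awsRegion": "us-east-1",
}))  # True on first sight
```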
3) Predicting “bill shock” events before the bill arrives
Cryptomining is one of the few attacks where cost is both impact and detection signal.
AI can forecast expected compute spend based on recent deployment behavior and flag:
- sudden GPU instance launches
- ECS task CPU/memory allocations that exceed norms
- Auto Scaling max desired counts that don’t match historical peaks
If your first alert is from Finance, you’ve already lost.
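A rough sketch of that early-warning check, short of a full forecasting model: flag running instances in expensive families and Auto Scaling groups whose MaxSize dwarfs anything seen historically. The family prefixes, the hard-coded historical peaks, and the 3x threshold are illustrative assumptions.

```python
import boto3

# Families where miners get the most value; extend for your own account mix.
EXPENSIVE_PREFIXES = ("p", "g", "inf", "trn", "x")   # illustrative list

# Assumption: historical peaks would come from your metrics store; values are made up.
historical_peaks = {"web-asg": 12, "batch-asg": 40}

def flag_bill_shock_risks(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    autoscaling = boto3.client("autoscaling", region_name=region)
    findings = []

    # 1) Running instances in expensive families
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                if inst["InstanceType"].startswith(EXPENSIVE_PREFIXES):
                    findings.append(("expensive_instance", inst["InstanceId"], inst["InstanceType"]))

    # 2) Auto Scaling groups whose MaxSize exceeds anything seen before
    for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
        for asg in page["AutoScalingGroups"]:
            peak = historical_peaks.get(asg["AutoScalingGroupName"], 10)
            if asg["MaxSize"] > peak * 3:   # assumption: 3x headroom threshold
                findings.append(("oversized_asg", asg["AutoScalingGroupName"], asg["MaxSize"]))
    return findings

if __name__ == "__main__":
    for finding in flag_bill_shock_risks():
        print(finding)
```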
4) Catching persistence tricks that responders overlook
Termination protection is a perfect example: it’s a legitimate setting, rarely used, and devastating during response.
AI can help by flagging:
- ModifyInstanceAttribute calls enabling disableApiTermination
- instances with termination protection that don’t match tagging standards
- changes performed by principals that have never performed instance-level attribute modifications
A useful internal rule of thumb: any termination protection change in an account that doesn’t require it for compliance should trigger human review within minutes.
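Here is a minimal audit in that spirit: list instances that have termination protection enabled but do not carry an approval tag. The tag key and value are placeholders for whatever your tagging standard actually uses.

```python
import boto3

REQUIRED_TAG = ("termination-protection", "approved")   # assumption: your tagging standard

def audit_termination_protection(region="us-east-1"):
    """List instances with termination protection enabled but no approval tag."""
    ec2 = boto3.client("ec2", region_name=region)
    suspicious = []

    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["pending", "running", "stopped"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                # One attribute call per instance: fine for a sketch, batch it in production.
                attr = ec2.describe_instance_attribute(
                    InstanceId=inst["InstanceId"], Attribute="disableApiTermination"
                )
                protected = attr["DisableApiTermination"]["Value"]
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                if protected and tags.get(REQUIRED_TAG[0]) != REQUIRED_TAG[1]:
                    suspicious.append(inst["InstanceId"])
    return suspicious

if __name__ == "__main__":
    print(audit_termination_protection())
```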
Practical controls that stop this attack (and where AI fits)
Answer-first: You stop cryptojacking by tightening identity, constraining compute, and automating response—AI improves speed and prioritization.
This isn’t about buying a tool and hoping. It’s about a small set of controls that work together.
Identity hardening (prevent the initial foothold)
- Require MFA everywhere you can (especially for console access)
- Prefer temporary credentials over long-lived access keys
- Reduce admin-like IAM users; use roles and just-in-time elevation
- Rotate keys and aggressively eliminate unused access keys
- Enforce least privilege for:
  - ecs:CreateCluster, ecs:RegisterTaskDefinition, ecs:CreateService
  - ec2:RunInstances, autoscaling:*, iam:CreateRole, iam:PassRole
  - ec2:ModifyInstanceAttribute
Where AI helps: it highlights which principals behave like “shadow admins” (broad access, broad usage) so you can prioritize refactoring.
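One of the easiest wins in that list is key hygiene, and it is scriptable today. Below is a small sketch that surfaces long-lived or idle access keys; the 90-day and 30-day thresholds are assumptions, and never-used keys are treated as stale on purpose.

```python
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90        # assumption: your rotation policy
MAX_IDLE_DAYS = 30           # assumption: unused-key threshold

def stale_access_keys():
    """List long-lived or idle access keys: the raw material of this campaign."""
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    findings = []

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
                age = (now - key["CreateDate"]).days
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                last_used = last["AccessKeyLastUsed"].get("LastUsedDate")
                idle = (now - last_used).days if last_used else None
                if age > MAX_KEY_AGE_DAYS or idle is None or idle > MAX_IDLE_DAYS:
                    findings.append((user["UserName"], key["AccessKeyId"], age, idle))
    return findings

if __name__ == "__main__":
    for finding in stale_access_keys():
        print(finding)
```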
Compute guardrails (limit blast radius)
Set hard constraints so a stolen credential can’t scale to 999 instances.
- Service quotas aligned to business reality (tighten GPUs and ML instances)
- SCPs or permission boundaries that restrict:
- high-cost instance families
- certain regions
- Auto Scaling max sizes
- Tag-based conditions: only allow instance launches with approved tags
Where AI helps: it identifies what “normal” capacity spikes look like so your guardrails don’t break legitimate incidents (like real seasonal traffic).
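As one possible shape for those guardrails, here is a sketch of a deny-style service control policy registered via AWS Organizations. The instance-type patterns, approved regions, and policy name are placeholders; test in a sandbox OU before attaching anything broadly.

```python
import json

import boto3

# Deny-style guardrail: block expensive instance families and unapproved regions
# account-wide, regardless of what a stolen credential is otherwise allowed to do.
GUARDRAIL_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyExpensiveInstanceFamilies",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringLike": {"ec2:InstanceType": ["p*", "g*", "x*", "inf*", "trn*"]}
            },
        },
        {
            "Sid": "DenyUnapprovedRegions",
            "Effect": "Deny",
            "Action": ["ec2:RunInstances", "ecs:CreateCluster", "ecs:RunTask"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            },
        },
    ],
}

def create_guardrail_policy():
    """Register the SCP; attaching it to OUs/accounts is a separate step."""
    org = boto3.client("organizations")
    return org.create_policy(
        Name="deny-cryptomining-compute",           # placeholder name
        Description="Block expensive instance families and unapproved regions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(GUARDRAIL_SCP),
    )

if __name__ == "__main__":
    print(create_guardrail_policy()["Policy"]["PolicySummary"]["Arn"])
```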
Container supply chain controls (reduce “one command deploys miner” risk)
This campaign abused a public container image. That’s common.
- Allowlist trusted registries and images
- Scan images continuously; fail deployment on high-risk findings
- Alert on new, never-before-seen images in production clusters
Where AI helps: it can cluster image behavior and runtime patterns, catching miners even when they’re repackaged.
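A simple starting point for the allowlist control: enumerate active ECS task definitions and flag any container image that is not pulled from a registry you trust. The allowed-registry prefix below is a placeholder for your own ECR.

```python
import boto3

ALLOWED_REGISTRIES = (
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/",   # placeholder: your ECR registry
)

def unapproved_task_images(region="us-east-1"):
    """List active ECS task definitions that pull images from unknown registries."""
    ecs = boto3.client("ecs", region_name=region)
    findings = []

    for page in ecs.get_paginator("list_task_definitions").paginate(status="ACTIVE"):
        for arn in page["taskDefinitionArns"]:
            taskdef = ecs.describe_task_definition(taskDefinition=arn)["taskDefinition"]
            for container in taskdef["containerDefinitions"]:
                image = container["image"]
                if not image.startswith(ALLOWED_REGISTRIES):
                    findings.append((arn, container["name"], image))
    return findings

if __name__ == "__main__":
    for finding in unapproved_task_images():
        print(finding)
```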
Automated detection and response (win the 10-minute race)
You need a playbook that triggers before the miner has time to scale.
A strong minimal response flow:
- Detect anomalous IAM activity (GuardDuty-like signals + AI scoring)
- Quarantine the principal
  - disable access keys
  - revoke sessions where possible
- Contain compute
  - scale ECS services to zero
  - suspend Auto Scaling processes
  - stop instances matching attacker patterns
- Reverse persistence
  - turn termination protection back off (disableApiTermination=False) so instances can be terminated again
  - remove malicious roles/users/policies
- Hunt for follow-on actions
  - SES configuration and sending activity
  - unusual Lambda functions (especially public invoke permissions)
Where AI helps: it triages incidents by likely impact (cost + privilege + spread) so responders don’t waste time on low-risk anomalies.
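To show how little code the first containment steps require, here is a sketch covering the quarantine, containment, and persistence-reversal steps of the flow above. All identifiers in the example invocation are placeholders, and a real runbook would add session revocation and evidence capture before stopping anything.

```python
import boto3

def quarantine_principal(user_name: str):
    """Quarantine step: cut off the compromised IAM user's programmatic access."""
    iam = boto3.client("iam")
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=user_name, AccessKeyId=key["AccessKeyId"], Status="Inactive"
        )

def contain_compute(region: str, cluster: str, service: str, asg_name: str, instance_ids: list):
    """Containment + persistence reversal: stop the mining footprint from growing."""
    ecs = boto3.client("ecs", region_name=region)
    autoscaling = boto3.client("autoscaling", region_name=region)
    ec2 = boto3.client("ec2", region_name=region)

    # Scale the malicious service to zero and freeze the Auto Scaling group
    ecs.update_service(cluster=cluster, service=service, desiredCount=0)
    autoscaling.suspend_processes(AutoScalingGroupName=asg_name)

    # Reverse the persistence trick before trying to stop or terminate instances
    for instance_id in instance_ids:
        ec2.modify_instance_attribute(
            InstanceId=instance_id, DisableApiTermination={"Value": False}
        )
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

# Example invocation with placeholder identifiers:
# quarantine_principal("suspected-user")
# contain_compute("us-east-1", "attacker-cluster", "attacker-service",
#                 "attacker-asg", ["i-0123456789abcdef0"])
```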
“People also ask” quick answers
Is AWS cryptojacking usually caused by an AWS vulnerability?
No. The most common cause is compromised credentials or overly permissive IAM policies. The platform is doing what the credentials allow.
What’s the fastest indicator of cryptomining in AWS?
A rapid burst of ECS task deployments or EC2 launches, often paired with unusual instance families (GPU/ML) and sudden CPU utilization spikes.
What’s the most overlooked persistence setting in EC2 incidents?
Termination protection via disableApiTermination=True. It slows containment and breaks a lot of “terminate first” automation.
What to do next if you want to prevent IAM-driven cryptojacking
This AWS campaign is a clean case study: once valid IAM credentials are in play, attackers can turn your cloud into their compute farm faster than most teams can open a ticket.
If you’re building an AI-driven cloud defense program, start with the boring truth: identity telemetry is your best early-warning system. Model normal IAM behavior, score anomalies in context, and automate the first containment steps. That’s how you win back the time attackers are stealing.
If you want a pragmatic next step, run a short internal exercise this week:
- Identify the top 10 IAM principals by privilege and usage
- Map which ones can create/scale compute and pass roles
- Decide what “impossible” behavior looks like (region, instance family, scaling limits)
- Then implement detections and automated containment around those constraints
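For the first two steps of that exercise, IAM's policy simulator can do the mapping for you. The sketch below checks which IAM users are allowed the high-impact actions from this campaign; extend it to roles with list_roles, and treat the action list as a starting point rather than a complete set.

```python
import boto3

# High-impact actions from this campaign; the attacker needs only a few of them.
COMPUTE_ACTIONS = [
    "ec2:RunInstances",
    "ecs:CreateCluster",
    "ecs:RegisterTaskDefinition",
    "autoscaling:CreateAutoScalingGroup",
    "iam:PassRole",
    "ec2:ModifyInstanceAttribute",
]

def principals_with_instant_compute():
    """Map which IAM users could have run this campaign in your account."""
    iam = boto3.client("iam")
    results = {}
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            sim = iam.simulate_principal_policy(
                PolicySourceArn=user["Arn"], ActionNames=COMPUTE_ACTIONS
            )
            allowed = [
                r["EvalActionName"]
                for r in sim["EvaluationResults"]
                if r["EvalDecision"] == "allowed"
            ]
            if allowed:
                results[user["UserName"]] = allowed
    return results

if __name__ == "__main__":
    for user, actions in principals_with_instant_compute().items():
        print(user, "->", ", ".join(actions))
```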
Where would an attacker get the most “instant compute” in your AWS account—and how quickly would you know it happened?