Stop Stolen AWS Credentials From Fueling Cryptomining

AI in Cybersecurity · By 3L3C

Stolen AWS credentials can spin up cryptominers in 10 minutes. Learn the attacker sequence and how AI-driven anomaly detection stops cloud abuse fast.

Tags: AWS security, IAM, cryptomining, cloud threat detection, AI security operations, incident response

A cryptomining hijack doesn’t need a fancy zero-day. It needs one valid AWS credential and about 10 minutes.

That’s the part that should make security teams uneasy: this kind of campaign looks “legitimate” at first glance because the attacker is using real IAM permissions, real APIs, and real cloud services—just for the wrong purpose. And when it hits in late December, it tends to land at the worst possible time: reduced staffing, change freezes, and year-end budget pressure.

This post is part of our AI in Cybersecurity series, and I’m going to take a firm stance: if your cloud detection strategy can’t spot stolen-credential behavior fast, you’ll keep paying for attackers’ compute. The good news is that this is exactly where AI-driven anomaly detection and automation shine—because the attacker’s workflow has patterns.

What this campaign teaches: stolen IAM beats “patched and safe”

This campaign is a clean case study in how cloud attacks actually happen in 2025: attackers didn’t break AWS—they logged in. They used compromised AWS Identity and Access Management (IAM) credentials to abuse Amazon EC2 and Amazon ECS for cryptomining across multiple customer environments.

The operational lesson is simple: you can be fully patched and still get wrecked if identity is weak. Cloud security teams often over-index on vulnerability management and under-invest in:

  • Detecting suspicious API usage patterns
  • Controlling privilege growth (role creation, policy attachments)
  • Rapid response that can keep up with automated adversaries

In the shared responsibility model, this sits squarely in the customer’s domain: credential hygiene, access governance, monitoring, and response.

Why cryptomining is still a top cloud abuse case

Cryptomining (cryptojacking) keeps coming back because it’s low drama and high margin:

  • It doesn’t require exfiltration or extortion to monetize.
  • It blends into “normal” cloud consumption until bills spike.
  • It can be automated end-to-end with scripts.

And the impact isn’t just cost. Mining workloads can:

  • Starve production resources and degrade performance
  • Trigger autoscaling chaos and incident fatigue
  • Mask other attacker activity (credential testing, persistence setup)

The attacker’s playbook (and the signals AI can spot)

The fastest way to defend against this isn’t memorizing indicators. It’s understanding the sequence. In this campaign, the attacker flow had a recognizable rhythm: recon → permission validation → role setup → deployment → persistence.

Step 1: Recon with service quotas and “DryRun”

Attackers checked EC2 service quotas (for example, GetServiceQuota) to figure out how much compute they could spin up. Then they tested permissions using RunInstances calls with the DryRun flag.

That detail matters. DryRun is a low-noise way to confirm “Can I do this?” without actually launching instances.

AI detection angle: A human admin rarely does repeated DryRun calls as part of normal operations—especially not from unfamiliar IP ranges, at odd times, or across multiple regions.

What I’ve found works in practice is treating this as a behavioral motif: quota checks + repeated dry runs + new role creation within a short window is a strong “stolen credential in motion” signature.
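
Here's a minimal sketch of that motif as code. It assumes you've already parsed CloudTrail events into dicts with eventName, errorCode, and eventTime fields; the 15-minute window and the exact DryRun error-code string are assumptions to tune and verify against your own trails:

```python
from datetime import timedelta

MOTIF_WINDOW = timedelta(minutes=15)  # assumed; tune per environment

def is_dry_run_probe(event):
    # CloudTrail records DryRun RunInstances calls as errors; verify the
    # exact errorCode value ("Client.DryRunOperation") in your own logs.
    return (event["eventName"] == "RunInstances"
            and "DryRunOperation" in (event.get("errorCode") or ""))

def recon_motif(events):
    """events: time-sorted CloudTrail events for a single principal."""
    quota_checks = [e for e in events if e["eventName"] == "GetServiceQuota"]
    probes = [e for e in events if is_dry_run_probe(e)]
    role_creates = [e for e in events
                    if e["eventName"] in ("CreateRole", "CreateServiceLinkedRole")]
    if not (quota_checks and len(probes) >= 2 and role_creates):
        return False
    hits = quota_checks + probes + role_creates
    first = min(e["eventTime"] for e in hits)
    last = max(e["eventTime"] for e in hits)
    return last - first <= MOTIF_WINDOW  # all three stages in one short window
```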

Step 2: Create roles for scale and automation

The attackers used APIs such as:

  • CreateServiceLinkedRole (to support Auto Scaling groups)
  • CreateRole (to support AWS Lambda)

Then they attached AWSLambdaBasicExecutionRole to the Lambda role.

This isn’t random. It’s how you build an environment that can automate actions and persist.

AI detection angle: Role creation isn’t inherently bad; unexpected role creation is. AI models that baseline each account’s “normal” identity change rate can flag:

  • Role creation by principals that don’t normally manage IAM
  • Sudden bursts of policy attachments
  • New roles created shortly after a new IP/ASN appears
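
A rough sketch of that baselining idea, assuming you can pull per-principal IAM-change events from your log store (the event schema, baseline period, and burst multiplier are all illustrative):

```python
from collections import Counter

IAM_CHANGE_EVENTS = {"CreateRole", "CreateServiceLinkedRole",
                     "AttachRolePolicy", "PutRolePolicy"}

def unusual_iam_activity(history, recent, burst_factor=3):
    """history: IAM-change events from the baseline period (e.g., 30 days);
    recent: events from the current window. Both are dicts with
    'principal' and 'eventName' fields (an assumed schema)."""
    baseline = Counter(e["principal"] for e in history
                       if e["eventName"] in IAM_CHANGE_EVENTS)
    current = Counter(e["principal"] for e in recent
                      if e["eventName"] in IAM_CHANGE_EVENTS)
    flagged = []
    for principal, count in current.items():
        if baseline[principal] == 0:
            flagged.append((principal, count, "first-seen IAM change"))
        elif count > burst_factor * baseline[principal]:
            flagged.append((principal, count, "burst vs. baseline"))
    return flagged
```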

Step 3: Deploy miners across EC2 and ECS—fast

Once reconnaissance and setup were done, the attackers deployed mining resources across EC2 and ECS and had them operational within roughly 10 minutes of initial access.

AI detection angle: Speed is the tell. Attackers compress the timeline. AI-assisted SOC workflows should treat “first seen credential + infrastructure create events” as a priority-0 path.

If your current process waits for:

  • a daily report,
  • a cost anomaly the next morning, or
  • someone noticing a CPU chart,

you’re too late.
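
One way to encode that priority-0 rule is a simple triage function. This sketch assumes you persist a set of access key IDs seen before the current window; the event fields and tiering are assumptions to adapt:

```python
CREATE_EVENTS = {"RunInstances", "RunTask", "RegisterTaskDefinition",
                 "CreateFunction"}

def triage_priority(event, seen_access_keys):
    """seen_access_keys: access key IDs observed before this window
    (assumes you persist this set between runs)."""
    first_seen = event.get("accessKeyId") not in seen_access_keys
    creates_infra = event["eventName"] in CREATE_EVENTS
    if first_seen and creates_infra:
        return 0  # page now; don't wait for tomorrow's cost report
    if first_seen or creates_infra:
        return 1
    return 2
```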

Step 4: Persistence through termination protection

A notable persistence tactic in this campaign was enabling instance termination protection (the disableApiTermination attribute) via ModifyInstanceAttribute. That forces responders to explicitly re-enable termination before they can delete attacker-launched instances.

This is a very practical attacker move: it disrupts both humans and automated cleanup.

AI detection angle: Termination protection changes are rare in many orgs. That makes them excellent anomaly candidates—particularly when they occur shortly after instance creation.

Snippet you can operationalize: “New instances + termination protection enabled shortly after creation is a high-confidence cryptomining persistence pattern.”
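
Here's a hunting sketch for that exact pattern, using boto3 to check termination protection on recently launched instances. The region, 24-hour lookback, and print-based alerting are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)  # assumed lookback

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            if inst["LaunchTime"] < cutoff:
                continue  # only hunt across recently launched instances
            attr = ec2.describe_instance_attribute(
                InstanceId=inst["InstanceId"],
                Attribute="disableApiTermination")
            if attr["DisableApiTermination"]["Value"]:
                print(f"ALERT: new instance {inst['InstanceId']} launched "
                      f"{inst['LaunchTime']} has termination protection enabled")
```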

Detection that works: combine AI anomaly scoring with hard guardrails

AI is strongest when it’s paired with controls that limit blast radius. You want both:

  1. Hard guardrails that prevent or constrain abuse
  2. AI-driven detection that recognizes suspicious sequences early

Here’s a practical approach.

Guardrails: make stolen credentials less useful

Start with identity basics, but do them aggressively:

  • Prefer temporary credentials over long-term access keys wherever possible.
  • Enforce MFA for all users, especially for any console access.
  • Reduce permissions: keep IAM principals to only required actions, especially around:
    • iam:CreateRole, iam:AttachRolePolicy, iam:PassRole
    • ec2:RunInstances, ecs:RunTask, ecs:RegisterTaskDefinition
    • lambda:CreateFunction, lambda:UpdateFunctionCode

If you can’t remove sensitive permissions, put them behind tighter conditions:

  • Require access from approved networks or identity-aware proxies
  • Require explicit change tickets using tags (and enforce with SCPs where appropriate)
  • Separate “build” roles from “run” roles
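
For illustration, here's what such a condition could look like as an SCP-style deny built in Python. The CIDR range and action list are assumptions; test in a non-production OU before enforcing anything like this:

```python
import json

# SCP-style deny: block the sensitive actions from outside approved networks,
# while letting AWS-service-originated calls through. CIDR is an assumption.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySensitiveActionsOffNetwork",
        "Effect": "Deny",
        "Action": [
            "iam:CreateRole", "iam:AttachRolePolicy", "iam:PassRole",
            "ec2:RunInstances", "ecs:RunTask", "lambda:CreateFunction"
        ],
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]},
            "Bool": {"aws:ViaAWSService": "false"}
        }
    }]
}
print(json.dumps(scp, indent=2))
```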

AI detection: score the sequence, not just the event

Most teams already have alerts for “new access key created” or “instance launched.” The miss is failing to connect the dots.

AI-driven threat detection in cloud environments works best when it:

  • Baselines normal IAM and API behavior per account/team
  • Scores event sequences over short time windows
  • Learns what “normal automation” looks like (CI/CD, Terraform, scheduled jobs)

A simple scoring model (even before fancy ML) can flag:

  1. Unfamiliar principal or IP makes GetServiceQuota
  2. Same actor makes repeated RunInstances DryRun
  3. Same actor creates IAM roles and attaches policies
  4. Instances/tasks launch in bursts (especially spot)
  5. Termination protection is enabled

Treat that chain as a single incident, not five separate alerts.
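
A minimal sketch of that sequence scoring, before any ML. The stage weights, 30-minute window, and incident threshold are assumptions you'd tune per account:

```python
from datetime import timedelta

STAGE_WEIGHTS = {
    "GetServiceQuota": 1,
    "RunInstances:DryRun": 2,
    "CreateRole": 2,
    "AttachRolePolicy": 2,
    "RunInstances": 3,
    "ModifyInstanceAttribute": 4,  # termination protection toggles
}
INCIDENT_THRESHOLD = 8          # assumed; open one incident above this
WINDOW = timedelta(minutes=30)  # assumed correlation window

def score_actor(events):
    """events: time-sorted CloudTrail events for one principal."""
    if not events:
        return 0, []
    score, stages_seen = 0, []
    window_start = events[0]["eventTime"]
    for e in events:
        if e["eventTime"] - window_start > WINDOW:
            break
        key = e["eventName"] + (":DryRun" if e.get("dryRun") else "")
        if key in STAGE_WEIGHTS and key not in stages_seen:
            stages_seen.append(key)
            score += STAGE_WEIGHTS[key]
    return score, stages_seen

# One actor crossing INCIDENT_THRESHOLD = one incident, not five alerts.
```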

Logging: centralize, normalize, and keep it searchable

If you’re trying to investigate across scattered logs, you’re going to lose time.

At minimum, centralize and retain the events that explain identity and compute activity:

  • Cloud API activity logs (for example, CloudTrail-equivalent events)
  • Authentication context (source IP, user agent, MFA usage)
  • ECS task definition changes and image pulls
  • Instance lifecycle events (create/stop/terminate, attribute changes)

The goal isn’t “collect everything.” The goal is answering two questions fast:

  • How did they get in?
  • What did they create and how do we remove it?
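
As one concrete path to "what did they create," here's a sketch using CloudTrail's LookupEvents API via boto3. The username and six-hour window are placeholders, and pagination is omitted for brevity:

```python
import boto3
from datetime import datetime, timedelta, timezone

ct = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)  # assumed suspicious window

resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username",
                       "AttributeValue": "suspicious-user"}],  # placeholder
    StartTime=start, EndTime=end)

for e in resp["Events"]:
    # Focus on events that created or changed infrastructure
    if e["EventName"].startswith(("Create", "Run", "Modify", "Attach")):
        print(e["EventTime"], e["EventName"],
              [r.get("ResourceName") for r in e.get("Resources", [])])
```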

Incident response for cryptomining: a 30-minute playbook

When cryptomining hits, cost pressure makes people rush—and rushing causes mistakes. Use a short playbook that assumes persistence tricks.

First 10 minutes: contain identity

  1. Identify the suspicious principal(s).
  2. Disable or rotate compromised access keys.
  3. Revoke active sessions where applicable.
  4. Add temporary deny rules for risky actions (RunInstances, CreateRole, PassRole) if you can do it safely (see the containment sketch below).
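
A minimal containment sketch, assuming an IAM user with a long-term access key (the user name and key ID are placeholders):

```python
import boto3, json

iam = boto3.client("iam")
USER, KEY_ID = "compromised-user", "AKIAEXAMPLEKEYID"  # placeholders

# 1. Deactivate rather than delete (you may need the key for forensics)
iam.update_access_key(UserName=USER, AccessKeyId=KEY_ID, Status="Inactive")

# 2. Temporary inline deny on the risky actions seen in this campaign
iam.put_user_policy(
    UserName=USER,
    PolicyName="ir-temp-deny",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["ec2:RunInstances", "iam:CreateRole", "iam:PassRole"],
            "Resource": "*"
        }]
    }))
```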

Next 10 minutes: stop compute safely

  1. Enumerate new EC2 instances and ECS tasks launched in the suspicious window.
  2. Check for termination protection and remove it where present.
  3. Stop/terminate resources and disable autoscaling artifacts created by the attacker (a cleanup sketch follows).
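
A cleanup sketch for the compute step, assuming you've already enumerated the suspect instance IDs (region and IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
suspect_ids = ["i-0123456789abcdef0"]               # placeholder IDs

for instance_id in suspect_ids:
    # Persistence trick from this campaign: protection must come off first
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": False})

ec2.terminate_instances(InstanceIds=suspect_ids)
```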

Final 10 minutes: eradicate and prevent re-entry

  1. Remove attacker-created roles, policies, and trust relationships (sketched after this list).
  2. Search for additional persistence hooks (Lambda functions, scheduled rules, user data scripts).
  3. Add detections around the exact sequence you observed.
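
And a sketch of the role-eradication step, assuming you've identified an attacker-created role. The role name is a placeholder, and in practice you may also need to remove instance profiles and review trust policies:

```python
import boto3

iam = boto3.client("iam")
role = "attacker-created-role"  # placeholder

# Detach managed policies, delete inline policies, then delete the role
for p in iam.list_attached_role_policies(RoleName=role)["AttachedPolicies"]:
    iam.detach_role_policy(RoleName=role, PolicyArn=p["PolicyArn"])
for name in iam.list_role_policies(RoleName=role)["PolicyNames"]:
    iam.delete_role_policy(RoleName=role, PolicyName=name)
iam.delete_role(RoleName=role)
```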

The teams that do well here don’t just “clean up.” They codify the pattern into detection and guardrails.

Indicators you can hunt for (and how AI helps prioritize them)

Indicators of compromise are useful, but only if you treat them as starting points.

For this campaign, defenders were advised to look for:

  • Container images used for miner deployment (one reported example was pulled from a public container registry and later removed)
  • Mining pool-related domains (multiple region-coded endpoints were observed)
  • Naming patterns for launched instances (spot vs on-demand conventions)

AI improves IoC usage by:

  • Correlating IoCs with who created what and from where
  • Prioritizing hits that occur alongside unusual IAM behavior
  • Reducing false positives when a benign domain string appears in unrelated logs

If you’re only matching strings, you’ll burn time. If you’re correlating behavior, you’ll catch the real compromise.

If you want fewer cloud incidents, treat IAM as your primary attack surface

Stolen AWS credentials powering cryptomining is not a niche threat. It’s the predictable outcome of three things happening at once:

  • Long-lived credentials exist
  • High-privilege actions are broadly available
  • Detection is slow or disconnected across tools

AI in cybersecurity is most valuable here because cloud attackers are automated. You need detection and response that can move at the same speed.

If you’re building your 2026 security roadmap right now, prioritize projects that reduce time-to-detect for identity abuse:

  • Behavioral analytics for cloud API usage
  • Automated response for high-confidence sequences (quota recon → DryRun → role creation → compute burst)
  • Stronger governance for non-human identities and role sprawl

The forward-looking question worth asking your team before the next on-call weekend: If an attacker gets one valid cloud credential tonight, do we stop them before minute 10?