Stop Cryptomining Fast: Detect Stolen AWS Credentials

AI in Cybersecurity • By 3L3C

Stolen AWS credentials can enable cryptomining in minutes. Learn the signals to watch and how AI anomaly detection can contain abuse fast.

AWS · IAM · Cloud Security · Cryptomining · Threat Detection · AI Security Operations

A well-run cryptomining attack doesn’t look like “hacking.” It looks like normal cloud usage—new instances spinning up, containers starting, roles being created—until your bill spikes and your team realizes they’ve been paying for someone else’s revenue stream.

That’s why the recent AWS-reported campaign is such a useful case study for this AI in Cybersecurity series: attackers used stolen, valid AWS IAM credentials (not a cloud platform vulnerability) and were able to get unauthorized mining running in roughly 10 minutes after initial access. If your detection depends on humans noticing odd dashboards hours later, you’re already behind.

Here’s the stance I’ll take: credential misuse is an anomaly-detection problem, and AI-driven monitoring is one of the most practical ways to catch it early—especially in fast-moving AWS environments where “normal” changes constantly.

What this cryptomining campaign teaches (and why it’s hard to spot)

The core lesson is simple: the attacker didn’t need malware on laptops or a zero-day in AWS—they needed your keys. With valid IAM access, they could operate through standard AWS APIs and blend into legitimate operational noise.

In the AWS-described campaign, the actor:

  • Probed customer environments using AWS APIs
  • Checked EC2 service quotas to see how big they could go
  • Used DryRun calls (for RunInstances) to confirm permissions without launching anything
  • Created roles to enable scaling and automation
  • Deployed miners across EC2 and ECS

This isn’t “loud” in the traditional sense. In many orgs, creating roles, launching compute, and running containers happen all day.

Why cryptomining is a favorite operational attack

Cryptomining (cryptojacking in cloud environments) is attractive because it’s:

  • Fast to monetize: compute becomes cash quickly
  • Low-friction: attackers use your existing cloud control plane
  • Often under-triaged: teams treat cost anomalies as finance problems, not security incidents

And December is a particularly painful time to discover it. End-of-year release pushes, capacity planning, and holiday staffing gaps mean unusual activity gets explained away as “the crunch.” Attackers count on that.

The attacker playbook: stolen credentials + cloud-native speed

If you want detection that actually works, you need to model what the attacker does next after they get access. This campaign is a clean example because it shows a recognizable sequence.

Stage 1: Recon that looks like routine automation

The actor checked EC2 quotas and repeatedly tested permissions with DryRun. That’s clever for two reasons:

  1. They learn exactly what they can do without triggering big cost or obvious resource creation.
  2. They reduce their footprint while mapping your environment.

A human reviewer looking at raw logs may see “API calls happened.” AI-assisted analytics can do better: it can score whether these calls are normal for this principal, at this time, from this network origin, in this sequence.
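As a minimal sketch of that idea, here is one way to score how unusual an API call is for a given principal, using a frequency baseline built from historical events. The principal names, API names, and scoring formula are all illustrative assumptions, not real log data or a production model:

```python
from collections import Counter

# Hypothetical sketch: score how unusual an API call is for a principal,
# based on a frequency baseline of (principal, api) pairs from history.
# Names and events below are illustrative, not real CloudTrail data.

def build_baseline(events):
    """Count (principal, api) pairs from historical CloudTrail-style events."""
    return Counter((e["principal"], e["api"]) for e in events)

def rarity_score(baseline, principal, api):
    """Return a 0..1 score: 1.0 means never seen for this principal."""
    total = sum(n for (p, _), n in baseline.items() if p == principal)
    if total == 0:
        return 1.0
    return 1.0 - (baseline[(principal, api)] / total)

history = [
    {"principal": "ci-deployer", "api": "RunInstances"},
    {"principal": "ci-deployer", "api": "RunInstances"},
    {"principal": "ci-deployer", "api": "DescribeInstances"},
]
baseline = build_baseline(history)

# "CreateRole" has never been seen for this principal -> maximally unusual
print(rarity_score(baseline, "ci-deployer", "CreateRole"))   # 1.0
print(rarity_score(baseline, "ci-deployer", "RunInstances")) # ≈ 0.33
```

A real system would also condition on time of day, network origin, and call ordering; this only captures the "is this call normal for this principal" dimension.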

Stage 2: Role creation to enable scale

The actor created IAM roles (including a service-linked role for auto scaling) and a Lambda role, then attached a basic execution policy.

This is where static rules often fail. Plenty of teams legitimately create roles and policies—especially in organizations shipping quickly or migrating workloads.

What’s “off” is the combination:

  • A principal that doesn’t usually create roles starts doing so
  • Role creation is followed by compute provisioning activity
  • The activity originates from an unusual IP/ASN or geography
  • Actions occur in a tight, automated burst (a common bot pattern)
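The four signals above can be sketched as a simple weighted combination, where no single signal is conclusive but together they push a confidence score up. The signal names, weights, and threshold semantics are illustrative assumptions, not tuned values:

```python
# Sketch: combine several weak signals into one confidence score.
# Weights here are illustrative assumptions, not tuned values.

SIGNALS = {
    "rare_role_creation": 0.3,  # principal doesn't normally create roles
    "role_then_compute": 0.3,   # role creation followed by provisioning
    "unusual_origin": 0.2,      # new IP/ASN or geography
    "automated_burst": 0.2,     # many actions in a tight time window
}

def confidence(observed):
    """Sum the weights of the signals that fired; cap at 1.0."""
    return min(1.0, sum(SIGNALS[s] for s in observed))

print(confidence({"rare_role_creation", "unusual_origin"}))  # moderate
print(confidence(set(SIGNALS)))                              # all signals fired
```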

Stage 3: Mining deployed across EC2 and ECS in ~10 minutes

That deployment speed should change how you think about response.

If your containment steps require a meeting, a ticket, and a handoff, the attacker will have miners running (and scaling) long before you act. For cloud attacks like this, speed beats perfection: detect early, contain automatically, then investigate.

The persistence twist: termination protection as an IR disruptor

One of the more interesting details in the campaign was the use of instance termination protection via ModifyInstanceAttribute, with the disableApiTermination attribute set to true.

That doesn’t make the attack stealthier. It makes your response slower.

Here’s the practical impact:

  • Automated cleanup that assumes it can terminate instances may fail
  • Responders have to first re-enable termination, then delete
  • Attackers buy extra time for miners to run, especially if teams are understaffed

This is the kind of tactic AI-enabled response platforms can handle well: if the system detects high-confidence cryptomining indicators, it can execute a multi-step playbook (remove termination protection → quarantine security groups → snapshot for forensics → terminate) without waiting for manual intervention.
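The ordering of that playbook is the important part, so here is a minimal sketch of it. The step functions are stand-ins for real API calls (in practice, something like boto3's EC2 operations); here they only record the order in which containment actions would run:

```python
# Sketch of the containment playbook ordering described above. Each step
# is a stub that logs its name; real implementations would call cloud APIs.

def build_playbook(log):
    def step(name):
        def run(instance_id):
            log.append(f"{name}:{instance_id}")
        return run
    return [
        step("disable_termination_protection"),  # undo the attacker's IR delay
        step("quarantine_security_group"),       # cut egress to mining pools
        step("snapshot_volumes"),                # preserve forensic evidence
        step("terminate_instance"),              # stop the spend
    ]

actions = []
for action in build_playbook(actions):
    action("i-0abc123")
print(actions)  # steps run in dependency order, termination last
```

The point of encoding the order in one place is that automation never tries to terminate before termination protection is removed, which is exactly the failure mode this tactic creates.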

Where AI-driven cloud threat detection pays off

AI in cybersecurity isn’t magic. But in cloud environments, it’s extremely good at one job: detecting behavior that doesn’t fit. And stolen-credential attacks are all about behavior.

What to detect: three “stolen AWS credential” signals that matter

If you only remember three detection ideas from this post, make them these:

  1. Uncharacteristic API sequences

    • Example: GetServiceQuota → repeated RunInstances with DryRun → role creation → rapid compute deployment.
    • The sequence matters more than any single call.
  2. Identity behavior drift (principal-level baselines)

    • A user, role, or access key starts doing things it has never done before: creating roles, touching ECS, launching spot fleets, changing instance attributes.
  3. Cost-amplifying actions executed at machine speed

    • Bursty provisioning, especially across regions or in unusual instance families, is a strong indicator of automation-driven abuse.

A good AI-driven monitoring approach doesn’t just alert on these; it assigns confidence based on multiple weak signals and their timing.
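Since the sequence and its timing matter more than any single call, one simple sketch is an ordered-subsequence check within a time window. This is a greedy, first-match simplification with illustrative event data, not a detection engine:

```python
# Sketch: check whether a suspicious ordered sequence of API calls
# occurred within a short window. Greedy first-match; event data is
# illustrative.

from datetime import datetime, timedelta

SUSPECT_SEQUENCE = ["GetServiceQuota", "RunInstances(DryRun)",
                    "CreateRole", "RunInstances"]

def sequence_in_window(events, pattern, window=timedelta(minutes=10)):
    """True if `pattern` appears in order among `events` within `window`."""
    events = sorted(events, key=lambda e: e["time"])
    idx, start = 0, None
    for e in events:
        if idx < len(pattern) and e["api"] == pattern[idx]:
            start = start or e["time"]
            if e["time"] - start > window:
                return False  # matched, but too slowly to look automated
            idx += 1
            if idx == len(pattern):
                return True
    return False

t0 = datetime(2025, 1, 6, 3, 0)
events = [
    {"api": "GetServiceQuota",      "time": t0},
    {"api": "RunInstances(DryRun)", "time": t0 + timedelta(minutes=1)},
    {"api": "CreateRole",           "time": t0 + timedelta(minutes=3)},
    {"api": "RunInstances",         "time": t0 + timedelta(minutes=6)},
]
print(sequence_in_window(events, SUSPECT_SEQUENCE))  # True
```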

Practical AI + rules combo that works in AWS

I’ve found the best results come from hybrid detection:

  • Deterministic rules for high-signal events (example: termination protection enabled on newly created instances by a principal that rarely touches EC2)
  • Machine-learning anomaly detection for behavior drift and unusual sequences
  • Graph/relationship analytics to connect identities, roles, policies, and resources created within the same “attack window”

This combination reduces false positives while still catching novel attacker tradecraft.
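A minimal sketch of the hybrid shape, using the termination-protection example as the deterministic rule and a placeholder anomaly score for the ML side. Field names, thresholds, and the verdict labels are illustrative assumptions:

```python
# Sketch of hybrid detection: a deterministic rule for a high-signal
# event, backed by an anomaly score for behavior drift. Field names and
# thresholds are illustrative.

def rule_termination_protection(event, ec2_users):
    """Deterministic: termination protection enabled by a principal
    that rarely touches EC2 (ec2_users = known EC2-active principals)."""
    return (event["api"] == "ModifyInstanceAttribute"
            and event.get("disableApiTermination") is True
            and event["principal"] not in ec2_users)

def verdict(event, ec2_users, anomaly_score):
    if rule_termination_protection(event, ec2_users):
        return "alert"        # high-signal rule wins outright
    if anomaly_score > 0.8:
        return "investigate"  # ML flags drift for human/automated triage
    return "ok"

evt = {"api": "ModifyInstanceAttribute", "disableApiTermination": True,
       "principal": "billing-reporter"}
print(verdict(evt, {"ci-deployer"}, anomaly_score=0.2))  # alert
```

The rule fires regardless of the anomaly score, which is the property that keeps high-signal events from being drowned out by a model's uncertainty.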

How to harden AWS IAM so stolen credentials don’t become admin power

Detection is vital, but the fastest “win” is shrinking what stolen credentials can do.

Prefer temporary credentials over long-lived access keys

Long-term access keys are a recurring root cause in cloud incidents because:

  • They get committed to repositories
  • They get copied into build logs
  • They get reused across tools and contractors

Temporary credentials (short-lived, scoped) reduce the blast radius dramatically.

Require MFA where humans log in—and minimize human IAM in automation paths

MFA helps for interactive access, but don’t stop there. The deeper problem is overpowered principals. If one stolen credential yields admin privileges, the attacker’s job is basically done.

Enforce least privilege with real testing, not guesswork

Least privilege fails when it’s treated as a spreadsheet exercise.

A better approach:

  • Start from what workloads actually do (from logs)
  • Generate candidate policies
  • Validate them in staging
  • Roll out with guardrails

This is another spot where AI can help: summarizing common action patterns per role and highlighting permissions that are never used.
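At its core, the "never used" part is a set difference between granted and observed actions. A minimal sketch, with illustrative action names (real policies also involve wildcards, conditions, and resource ARNs, which this ignores):

```python
# Sketch: highlight permissions a role never uses, by comparing actions
# granted in its policy with actions observed in logs. Action names are
# illustrative; real IAM policies include wildcards this doesn't handle.

def unused_permissions(granted, observed):
    """Return granted actions that never appear in observed usage."""
    return sorted(set(granted) - set(observed))

granted  = {"ec2:RunInstances", "ec2:TerminateInstances", "iam:CreateRole"}
observed = {"ec2:RunInstances"}
print(unused_permissions(granted, observed))
# ['ec2:TerminateInstances', 'iam:CreateRole']
```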

A detection and response checklist you can use this week

If you’re responsible for AWS security operations, here’s a concrete checklist aligned to this campaign’s tactics.

Detection (GuardDuty/CloudTrail/SIEM/SOAR)

  • Alert on repeated DryRun RunInstances attempts, especially from new IPs/ASNs
  • Alert on GetServiceQuota calls followed closely by provisioning actions
  • Detect new IAM role creation + policy attachment bursts (same actor, short time window)
  • Monitor for ModifyInstanceAttribute changes enabling termination protection
  • Flag unusual spot instance naming patterns and rapid scaling behavior
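The first checklist item can be sketched as a simple counting rule: repeated DryRun RunInstances attempts from an origin outside the principal's known set. The threshold, ASN values, and event shape are illustrative assumptions:

```python
# Sketch of the first checklist rule: repeated DryRun RunInstances
# attempts from an unknown ASN. Threshold and ASN data are illustrative.

def dryrun_probe_alert(events, known_asns, threshold=3):
    """Alert if DryRun RunInstances from an unknown ASN occurs
    at least `threshold` times."""
    hits = [e for e in events
            if e["api"] == "RunInstances" and e.get("dryRun")
            and e["asn"] not in known_asns]
    return len(hits) >= threshold

probes = [{"api": "RunInstances", "dryRun": True, "asn": "AS9999"}] * 4
print(dryrun_probe_alert(probes, known_asns={"AS16509"}))  # True
```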

Response automation (fast containment matters)

  • Automatically quarantine suspicious instances (restrict egress, isolate VPC/security group)
  • Remove termination protection when malicious patterns are confirmed
  • Snapshot disk/volumes for forensics before termination
  • Rotate or disable the suspected access keys and invalidate sessions
  • Create a “break-glass” playbook for mass instance termination during cost-explosion events

A useful internal metric: time-to-containment for cloud credential misuse. If it’s measured in hours, miners will finish a profitable run.

People also ask: “How do I know it’s cryptomining and not a big batch job?”

The clean answer: intent shows up in the combination of network indicators, runtime artifacts, and identity behavior. Batch jobs usually have predictable schedules, known images, known repositories, and stable IAM roles.

Cryptomining in AWS commonly correlates with:

  • Unknown container images or newly created images with no internal provenance
  • Outbound traffic to known mining pool patterns/domains
  • Compute that appears suddenly, scales quickly, and lacks expected app telemetry
  • Roles and policies created solely to enable rapid provisioning

This is exactly where AI-assisted triage helps: it can compare the event to known internal “good” deployments and highlight what doesn’t match.
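That comparison can be sketched as checking a deployment's attributes against a known-good baseline and listing what doesn't match. The registry names, role names, and attribute set are illustrative assumptions:

```python
# Sketch: compare a new deployment's attributes to known-good internal
# deployments and list what doesn't match. All values are illustrative.

KNOWN_GOOD = {
    "image": {"registry.internal/app:v12", "registry.internal/worker:v8"},
    "role":  {"app-runtime", "worker-runtime"},
}

def mismatches(deployment):
    """Return the attributes that don't match any known-good value."""
    return [k for k, allowed in KNOWN_GOOD.items()
            if deployment.get(k) not in allowed]

suspect = {"image": "docker.io/unknown/xmrig:latest", "role": "ecsTaskRole-new"}
print(mismatches(suspect))  # ['image', 'role']
```

The output is what makes triage fast: instead of "this deployment is anomalous," the responder sees exactly which attributes have no internal provenance.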

What to do next (and the question your team should answer)

This AWS cryptomining campaign is a reminder that cloud security is identity security, and identity security is increasingly a data problem: too many logs, too many actions, too little time.

If you’re building an AI-driven security program, credential misuse detection is one of the highest-ROI starting points because it directly reduces:

  • Cloud spend from unauthorized compute
  • Incident response time spent on log archaeology
  • Business disruption from emergency shutdowns

Next step: pick one AWS account (or one business unit) and implement a measurable goal—detect suspicious compute provisioning within 5 minutes and contain within 15—using a mix of anomaly detection and automated playbooks.

The question worth debating in your next security ops meeting: If an attacker got valid AWS credentials right now, would you stop the first miner before it runs long enough to matter?