Stop AWS Cryptomining Fast With AI Detection


AI-driven monitoring can spot stolen AWS credentials and cryptomining behavior in minutes—before cloud costs spike. Learn signals, tactics, and defenses.

Tags: aws-security · cloud-security · iam · cryptojacking · ai-threat-detection · incident-response


Attackers don’t need a zero-day to drain your cloud budget. They just need one set of valid AWS credentials.

A recent cryptomining campaign showed how quickly this can turn into real damage: after initial access, miners were running across Amazon EC2 and Amazon ECS in about 10 minutes. That’s not a “we’ll look at it after the holidays” problem. It’s a “do we have automated detection and response?” problem.

The uncomfortable truth is that most cloud teams still treat stolen credentials as an IAM hygiene issue instead of an operations and detection issue. Strong IAM matters, but it’s not enough. This post breaks down how these AWS cryptomining intrusions work, what signals show up early, and where AI-driven monitoring consistently beats rule-only approaches.

What this AWS cryptomining campaign tells us (and why it’s repeatable)

This campaign is straightforward to copy because it relies on things AWS is designed to allow: API calls, roles, and compute provisioning. No exotic exploitation required.

Attackers used compromised AWS IAM credentials to gain administrative access to customer accounts, then spread cryptominers across EC2 and ECS. The speed matters, but the method matters more: the attacker behaved like a power user with automation.

Here’s the pattern security teams should internalize: cloud cryptomining is rarely about stealthy malware—it's about abusing legitimate cloud controls at high velocity. If your detection strategy is “we’ll spot weird binaries on hosts,” you’ll be late.

The shared responsibility model doesn’t save you

Cloud providers secure the underlying infrastructure. If an attacker signs in with valid credentials and calls RunInstances, that’s your problem. The good news is you also have a huge advantage: in AWS, nearly every action has an API event trail you can analyze.

If you treat your AWS audit logs as “compliance evidence,” you’re leaving detection power on the table.

The attacker playbook: reconnaissance, build-out, blast radius

This campaign followed a clean sequence that shows up across many credential-theft incidents.

Step 1: Permission probing that looks “almost normal”

The actor began by probing what the account could do, including checking EC2 service quotas and validating permissions with repeated RunInstances calls using the DryRun flag.

That DryRun detail is a gift to defenders.

  • It’s deliberate reconnaissance
  • It validates privileges without launching resources
  • It reduces immediate costs and can reduce noisy signals

AI is strong here because the right question isn’t “is DryRun bad?” It’s “is this user, from this network, at this time, calling DryRun repeatedly in a way we’ve never seen for them?”
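As a minimal sketch of that question, here's one way to flag a DryRun burst per principal over CloudTrail-like events. The event shape, window, and threshold are illustrative assumptions, not a production detector:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # assumed sliding window
DRYRUN_THRESHOLD = 5   # assumed per-window burst threshold

def find_dryrun_bursts(events, window=WINDOW_SECONDS, threshold=DRYRUN_THRESHOLD):
    """Return principals whose DryRun RunInstances calls exceed `threshold`
    within any sliding `window` of seconds. Events are hypothetical dicts
    with 'principal', 'event_name', 'dry_run', and epoch 'time' keys."""
    recent = defaultdict(deque)  # principal -> recent DryRun timestamps
    flagged = set()
    for e in sorted(events, key=lambda e: e["time"]):
        if e["event_name"] != "RunInstances" or not e.get("dry_run"):
            continue
        q = recent[e["principal"]]
        q.append(e["time"])
        # Drop timestamps that have aged out of the window.
        while q and e["time"] - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(e["principal"])
    return flagged
```

A real system would compare the burst against that principal's own history rather than a fixed threshold, but the shape of the check is the same.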

Step 2: Creating roles to scale and persist

The attacker created IAM roles used later for impact and persistence, including:

  • CreateServiceLinkedRole (for Auto Scaling groups)
  • CreateRole (for AWS Lambda)
  • Attaching AWSLambdaBasicExecutionRole

This is a common attacker move: create new identities that outlive the initial stolen key.

A lot of teams alert on “root account used” and miss the quieter story: new roles created outside your change windows and policies attached in combinations you don’t normally deploy.
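That quieter story can be checked mechanically. A sketch, assuming you can express your change windows and expected policy attachments (both hypothetical values here):

```python
from datetime import datetime, timezone

ROLE_EVENTS = {"CreateRole", "CreateServiceLinkedRole", "AttachRolePolicy"}
CHANGE_WINDOW_UTC = range(9, 18)  # assumed approved deploy hours (UTC)
# Assumed baseline of policies your pipelines normally attach:
EXPECTED_POLICIES = {"arn:aws:iam::aws:policy/ReadOnlyAccess"}

def suspicious_role_changes(events):
    """Return role-scaffolding events that happen outside the change
    window, or that attach a policy outside the expected set."""
    hits = []
    for e in events:
        if e["event_name"] not in ROLE_EVENTS:
            continue
        hour = datetime.fromtimestamp(e["time"], tz=timezone.utc).hour
        off_hours = hour not in CHANGE_WINDOW_UTC
        odd_policy = (e["event_name"] == "AttachRolePolicy"
                      and e.get("policy_arn") not in EXPECTED_POLICIES)
        if off_hours or odd_policy:
            hits.append(e)
    return hits
```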

Step 3: Rapid deployment across EC2 and ECS

Once reconnaissance and role setup were complete, the attacker deployed cryptominers across:

  • Amazon EC2 (including Spot and On-Demand patterns)
  • Amazon ECS (containerized mining)

In the campaign, miners were operational within about 10 minutes.

If your mean time to detect in cloud is measured in hours, you’ll feel this one in Finance first.

Step 4: Persistence designed to slow responders

One of the more frustrating tactics was enabling termination protection: setting the disableApiTermination attribute to true via ModifyInstanceAttribute.

That forces responders to re-enable termination before cleanup.

This is a good example of attacker realism: cryptomining isn’t about perfect invisibility; it’s about staying online long enough to profit while defenders waste time.

Indicators that matter: what to look for in AWS logs and workloads

Rules and IoCs still help, especially for fast triage. AWS highlighted several concrete signals from the campaign:

  • Malicious container image usage (example observed: a Docker Hub image named yenik65958/secret, since removed)
  • Known cryptomining-related domains used in the operation (examples observed: asia[.]rplant[.]xyz, eu[.]rplant[.]xyz, na[.]rplant[.]xyz)
  • Naming conventions for instances that suggest mining fleets:
    • SPOT-us-east-1-G*-*
    • OD-us-east-1-G*-*

You should absolutely monitor for patterns like these. But don’t stop there, because attackers rotate infrastructure.
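For fast triage, the indicators above can be matched directly. A sketch; the regex is my reading of the published wildcard pattern, and the domains are written undefanged so they match real telemetry:

```python
import re

MINING_DOMAINS = {"asia.rplant.xyz", "eu.rplant.xyz", "na.rplant.xyz"}
BAD_IMAGES = {"yenik65958/secret"}
# Assumed interpretation of SPOT-us-east-1-G*-* / OD-us-east-1-G*-*:
FLEET_NAME = re.compile(r"^(SPOT|OD)-us-east-1-G\S*-\S*$")

def match_iocs(instance_names, images, domains):
    """Return which known-bad artifacts appear in the given telemetry."""
    return {
        "fleet_names": [n for n in instance_names if FLEET_NAME.match(n)],
        "images": sorted(BAD_IMAGES & set(images)),
        "domains": sorted(MINING_DOMAINS & set(domains)),
    }
```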

The better approach: detect the behavior, not just the artifact

IoCs rot fast. Behaviors last.

Behavioral signals that are strong early warnings for AWS cryptomining attacks include:

  • Burst of DryRun calls followed by real RunInstances
  • New IAM roles created and policies attached outside normal pipelines
  • Sudden changes to instance attributes like termination protection
  • ECS task definitions or services created from unfamiliar registries/images
  • Unexpected spikes in GPU/compute-heavy instance families or sudden Spot usage
  • Network egress to mining pool patterns or unusual DNS lookups from compute nodes

AI-driven analytics can model “normal” for each principal, VPC, region, and workload class, then surface deviations that matter.

Where AI fits: turning noisy cloud telemetry into a 10-minute response

If your team already has GuardDuty, CloudTrail, VPC Flow Logs, and container telemetry, you’re not lacking data. You’re lacking a way to connect the dots at speed.

AI helps in three specific ways that map directly to this kind of incident.

1) Anomalous credential usage detection

Valid credentials are the new malware. AI-based detection can flag:

  • Impossible or unusual travel patterns (sudden geo/provider changes)
  • First-seen API sequences for a principal (for example, quota checks + repeated DryRun)
  • API calls at unusual hours relative to that identity’s baseline
  • Credential use from infrastructure you don’t normally use (new hosting provider ASN patterns)

A practical stance: if you can’t baseline IAM principals (humans and non-humans), you’re guessing.
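The simplest useful baseline is first-seen API tracking. A minimal sketch (real systems would also model source network, region, and time of day):

```python
from collections import defaultdict

class PrincipalBaseline:
    """Learn which APIs each IAM principal normally calls, then flag
    first-seen calls. Event shape is an assumed CloudTrail-like dict."""

    def __init__(self):
        self.seen = defaultdict(set)  # principal -> APIs observed in training

    def train(self, events):
        for e in events:
            self.seen[e["principal"]].add(e["event_name"])

    def first_seen(self, event):
        """True if this principal has never called this API before."""
        return event["event_name"] not in self.seen[event["principal"]]
```

First-seen alone is noisy for humans; it earns its keep on non-human identities (CI roles, service accounts), whose API vocabulary should barely change.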

2) Sequence-based detection (the “attack chain” view)

Most cloud alerts fail because they’re isolated:

  • “A role was created” (so what?)
  • “An instance was launched” (that happens all day)
  • “A policy was attached” (could be normal)

The value comes from recognizing the sequence:

Reconnaissance (GetServiceQuota, DryRun) → privilege scaffolding (CreateRole, policy attach) → compute provisioning (RunInstances, ECS service creation) → persistence (ModifyInstanceAttribute)

Machine learning models and graph-based analytics can score these sequences as a unit. That’s how you get earlier, more confident detection.
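The chain above can be scored as a unit even without ML. A sketch that counts how many stages appear, in order, in a principal's time-sorted events; the stage sets are illustrative, and "RunInstances:DryRun" assumes a preprocessor that tags DryRun calls with a distinct name:

```python
STAGES = [
    ("recon",       {"GetServiceQuota", "RunInstances:DryRun"}),
    ("scaffolding", {"CreateRole", "AttachRolePolicy", "CreateServiceLinkedRole"}),
    ("provision",   {"RunInstances", "CreateService", "RegisterTaskDefinition"}),
    ("persistence", {"ModifyInstanceAttribute"}),
]

def score_chain(event_names):
    """Count attack stages completed in order. 4/4 means the full
    recon -> scaffolding -> provision -> persistence chain appeared."""
    stage_idx = 0
    for name in event_names:
        if stage_idx < len(STAGES) and name in STAGES[stage_idx][1]:
            stage_idx += 1  # this stage observed; look for the next one
    return stage_idx, len(STAGES)
```

Each stage alone is weak evidence; a principal completing three or four stages in sequence is a much stronger signal than any single alert.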

3) Automated containment that’s proportional (and safe)

Automation is only useful if it doesn’t take down production.

For cryptomining, the best automated actions are usually reversible and scoped:

  • Quarantine suspicious instances into a restricted security group
  • Temporarily disable suspicious access keys (with approval workflows)
  • Apply SCPs (service control policies) to block instance creation in specific regions
  • Kill or scale down suspicious ECS services
  • Block known mining pool destinations at egress controls

AI can help choose which action to take based on confidence, business context, and blast radius.
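One way to make that choice explicit is a confidence ladder. A hedged sketch; the thresholds and action names are assumptions, and each action maps to one of the reversible steps above:

```python
def choose_action(confidence, is_production):
    """Map a 0..1 detection confidence to a proportional, reversible
    containment action, gating production through approval."""
    if confidence >= 0.9:
        # High confidence: quarantine immediately, even in production.
        return "quarantine_security_group"
    if confidence >= 0.7:
        # Medium: disable the key, but gate production through approval.
        return "request_approval" if is_production else "disable_access_key"
    if confidence >= 0.5:
        return "alert_oncall"
    return "log_only"
```

The point of encoding this is auditability: responders can see exactly why automation did (or didn't) act.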

A practical defense plan for AWS credential theft and cryptojacking

If you want to reduce the odds of a cryptomining bill shock before Q4 closes, focus on two tracks: prevention and fast detection/response.

Prevention: make stolen credentials less useful

  • Prefer temporary credentials over long-lived access keys wherever possible
  • Enforce MFA for all human users
  • Reduce permissions with least privilege (especially around iam:*, ec2:RunInstances, ecs:*, lambda:*)
  • Restrict administrative actions to known networks or identity-aware controls
  • Tighten egress where feasible (mining needs to talk out)

One opinionated rule I like: treat “ability to create compute” as a privileged capability, not a default developer permission.

Detection: build guardrails around high-risk APIs

At minimum, monitor and alert on:

  • RunInstances bursts, especially first-seen instance types/regions
  • Repeated RunInstances with DryRun
  • CreateRole, AttachRolePolicy, CreateServiceLinkedRole
  • ModifyInstanceAttribute changes to termination behavior
  • ECS: new task definitions/services referencing unfamiliar images

Centralize logs so responders can see the story fast:

  • Use CloudTrail broadly and consistently
  • Aggregate into a central security account
  • Correlate with network and container telemetry

Response: rehearse the 30-minute “cloud cryptojacking drill”

Most teams have ransomware tabletop exercises. Few do cryptomining drills, even though they’re common and fast.

Your runbook should answer:

  1. How do we confirm mining behavior (CPU/GPU, process/container evidence, egress)?
  2. How do we identify the initial credential and all actions taken by it?
  3. How do we stop new instance creation quickly without breaking CI/CD?
  4. How do we clean up when termination protection is enabled?
  5. How do we rotate credentials and prevent re-entry (roles, policies, access keys)?

If you can’t do steps 2–4 quickly, you’re going to pay for it—literally.

Quick Q&A that comes up in real incidents

Why do cryptomining attacks keep targeting AWS?

Because AWS makes it easy to provision massive compute quickly, and stolen IAM credentials turn that convenience into attacker profit. Cloud scale works both ways.

Is cryptomining “just a cost issue”?

No. It’s often the first visible symptom of a broader compromise. The same access used for mining can be used for data exfiltration, lateral movement, or persistence via IAM changes.

Can we solve this with static rules alone?

Rules catch known patterns. They miss first-seen sequences and “normal-looking” abuse by valid credentials. AI helps by spotting contextual anomalies and suspicious multi-step chains.

What to do next if you want fewer surprises in your AWS bill

If you take one lesson from this campaign, make it this: valid AWS credentials plus automation can produce real impact in minutes. Cryptomining is simply the easiest way for attackers to monetize that access.

Start by tightening credential practices (temporary creds, MFA, least privilege), then invest in AI-driven monitoring that can flag abnormal IAM usage and suspicious compute provisioning as a connected story—not scattered alerts.

If you could detect “quota checks + repeated DryRun + new role creation + EC2/ECS burst” as a single incident within five minutes, how many cloud security problems would get easier overnight?