Stop GitHub Actions Supply Chain Attacks With AI

GitHub Actions supply chain attacks rose in 2025. Learn practical hardening steps and how AI-driven detection can catch pipeline abuse in real time.
A lot of teams treat CI/CD like plumbing: once it works, nobody wants to touch it. Attackers love that.
In 2025, supply chain attacks targeting GitHub Actions climbed because they’re a high-trust, high-privilege choke point. When a workflow runs, it often has the exact things an attacker wants: GITHUB_TOKEN, cloud credentials, package registry tokens, signing keys, and permission to publish, deploy, or modify code.
This post is part of our AI in Cybersecurity series, and it’s a perfect example of where AI actually earns its keep. Humans can’t review every YAML change, every third‑party Action update, and every suspicious outbound call during builds. AI-driven threat detection can.
Why GitHub Actions became a top supply chain target in 2025
GitHub Actions is attractive to attackers for one simple reason: compromise one workflow step, inherit the pipeline’s trust.
At Black Hat Europe 2025, researchers highlighted a pattern defenders keep learning the hard way: many compromises don’t require “breaking GitHub.” They exploit misconfiguration, overly broad permissions, and unsafe third‑party components inside the workflow.
The common attack path: “steal secrets, then pivot”
Most GitHub Actions supply chain incidents follow a repeatable chain:
- Find a workflow that can be influenced (for example, unsafe triggers, untrusted PR execution, or a compromised dependency/Action).
- Get code to execute in the runner (by poisoning a third‑party Action, abusing workflow logic, or exploiting permissive event handling).
- Exfiltrate secrets exposed to that job (PATs, npm tokens, cloud keys, SSH keys, signing certs).
- Use those secrets to pivot: publish a malicious package, alter releases, access internal repos, or reach production.
That’s why these attacks scale so fast: the pipeline is the distribution engine.
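To make the chain concrete, here's a minimal sketch of a workflow shaped for exactly this abuse. The Action and secret names are hypothetical; what matters is the combination of a mutable tag, exposed secrets, and publish rights in one job:

```yaml
# Hypothetical workflow illustrating the chain above.
# build-helpers/setup-tooling and NPM_TOKEN are placeholder names.
name: build-and-publish
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # pin to a full commit SHA in practice
      # Steps 1-2 of the chain: a third-party Action pulled by mutable tag.
      # If the tag is repointed to malicious code, that code runs here.
      - uses: build-helpers/setup-tooling@v1
      # Step 3: every secret exposed to this job is in reach of that code.
      # Step 4: a publish-capable token is ready-made pivot material.
      - run: npm publish
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```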
A real 2025 example: tj-actions/changed-files (CVE-2025-30066)
One of the most cited 2025 cases involved a compromise of a popular third‑party GitHub Action (tj-actions/changed-files, tracked as CVE-2025-30066). Public reporting indicated the compromised Action exposed sensitive secrets, such as GitHub tokens and registry tokens, in affected pipelines.
Here’s the part teams miss: the attacker’s “target” may be one big org, but the blast radius often includes everyone who reused the same Action. That’s supply chain risk in its purest form.
A pipeline that builds fast but trusts blindly is a pipeline designed for attackers.
The shared responsibility trap: what GitHub won’t fix for you
GitHub provides security controls, but GitHub can’t stop you from making dangerous choices in your workflows.
The Black Hat Europe message was blunt: open source and CI/CD security is a shared responsibility model, and too many organizations behave like it’s GitHub’s job alone.
Three ways teams accidentally widen the attack surface
1) Treating third-party Actions as “just scripts.” Actions can execute arbitrary code. Pulling a third‑party Action is closer to installing a build agent plugin than importing a library.
2) Over-permissioned tokens by default. If your workflow can write to the repo, create releases, access environments, or mint OIDC tokens without guardrails, you’ve turned a build step into an admin.
3) Letting untrusted events touch secrets. Misusing events like `pull_request_target`, or injecting PR-controlled inputs into shell steps, is a classic way to hand attackers execution.
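For concreteness, here's the classic shape of that mistake, with comments on why each line matters. This is an illustrative anti-pattern with placeholder secret names, not code to copy:

```yaml
# Anti-pattern sketch: pull_request_target runs with the base repo's secrets,
# yet this workflow checks out and executes the untrusted PR head.
on: pull_request_target

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a commit SHA in practice
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # attacker-controlled code
      - run: npm install && npm test    # install scripts run with secrets in scope
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}            # placeholder secret
      # Related anti-pattern: PR-controlled text interpolated into a shell step.
      - run: echo "PR title: ${{ github.event.pull_request.title }}"
```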
This is where AI adds value: it can watch for behavior, not just known bad signatures.
Where AI helps most: detecting pipeline abuse in real time
Static controls (pinning, reviews, least privilege) are table stakes. They reduce probability. But you also need detection that reduces dwell time.
AI works in CI/CD when it’s used for anomaly detection, sequence analysis, and policy enforcement at scale.
1) AI for workflow anomaly detection
The strongest signal in GitHub Actions incidents is often behavioral:
- A build job that suddenly makes outbound calls to unfamiliar domains
- A new step that base64-encodes environment variables
- A dependency install that now pulls from an unexpected registry
- A release workflow that runs at odd times or from unexpected branches
A practical approach is to train detection on your “known good” pipeline behavior:
- Expected network destinations during build
- Normal file paths touched
- Typical commands executed per repo
- Normal identity posture (which repo, which environment, which role)
Then alert on deviations that matter.
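One way to make "known good" concrete is a per-repo baseline file that your detection pipeline consumes. The schema below is hypothetical, but it captures the four signals listed above:

```yaml
# Hypothetical baseline spec; field names are illustrative, not a real tool's schema.
repo: acme/payments-service
baseline:
  egress:                        # expected network destinations during build
    - github.com:443
    - registry.npmjs.org:443
  paths:                         # file paths a build normally touches
    - src/**
    - dist/**
  commands:                      # typical commands executed in this repo's jobs
    - npm ci
    - npm test
  identity:                      # normal identity posture
    environments: [staging]
    oidc_roles: [ci-build-role]
alert_on: deviation              # anything outside the baseline gets scored
```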
Why AI beats rules here: attackers mutate quickly. Rules catch last month’s trick. Models catch “this doesn’t look like us.”
2) LLMs as reviewers for risky YAML and PR changes
Most teams don’t have time for deep CI/CD reviews. An LLM used as a review assistant can flag risk patterns immediately when a PR touches .github/workflows/*.yml:
- Secrets exposed to PR contexts
- Use of `pull_request_target` with checkout of untrusted code
- Shell injection via unsanitized inputs
- Unpinned Actions (e.g., `uses: org/action@main`)
- Added permissions like `contents: write` without justification
This is not “AI replacing AppSec.” It’s AI making sure AppSec sees the 10 changes that actually matter.
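Wiring this in can be lightweight: a workflow that fires only when a PR touches workflow files and posts findings back. The review script below (scripts/review_workflow_diff.py) is a hypothetical placeholder for whatever prompts your LLM with the diff; the sketch also assumes same-repo PRs, since forked PRs get restricted tokens:

```yaml
# Sketch: trigger an LLM risk review whenever workflow files change.
name: workflow-change-review
on:
  pull_request:
    paths:
      - ".github/workflows/**"

permissions:
  contents: read
  pull-requests: write            # only so the bot can post its findings

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a commit SHA in practice
      - name: LLM risk review of the workflow diff
        # Hypothetical script: feeds the diff to an LLM, posts a risk-scored comment.
        run: python scripts/review_workflow_diff.py --pr "${{ github.event.pull_request.number }}"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```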
3) AI to protect secrets and identities (not just code)
A stolen token is often the real incident, not the initial workflow edit.
AI-driven identity analytics can detect:
- PAT usage from new geographies or new user agents
- Sudden spikes in repo cloning or API calls
- Abnormal package publish patterns (new package name similarity, unusual version jumps)
- OIDC token minting that doesn’t match normal deployment cadence
If you’re already running UEBA or identity threat detection, extend it to developer identities and CI identities. Pipelines are identities.
A hardened GitHub Actions checklist (with AI-friendly controls)
Here’s what I recommend when teams ask “what do we do on Monday?” Start with controls that are both preventive and observable.
Lock down what can run
- Pin Actions to immutable SHAs, not tags.
- Reduce third‑party Actions to the minimum; prefer well-maintained, widely scrutinized projects.
- Use required reviews for workflow changes (`CODEOWNERS` for `.github/workflows/`).
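In practice, pinning looks like this; the SHA below is a placeholder, so resolve the real commit for each Action you use:

```yaml
steps:
  # Risky: mutable references that whoever controls the Action can repoint.
  # - uses: org/action@main
  # - uses: org/action@v3
  # Safer: an immutable full-length commit SHA, with the tag noted for humans.
  - uses: org/action@8f4b7f84bd579b95d7f0b0d8f4b7f84bd579b95d   # v3.2.1 (placeholder SHA)

# And in CODEOWNERS (a separate file), gate workflow changes behind review:
#   /.github/workflows/  @your-org/appsec-team
```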
AI assist: have an LLM bot auto-comment on workflow PRs with a risk score and specific findings (“new permissions requested”, “new network egress likely”).
Minimize permissions and secret exposure
- Set default `GITHUB_TOKEN` permissions to read-only, then elevate only per job.
- Don't pass long-lived secrets unless necessary; prefer OIDC short-lived credentials.
- Separate build and release: builds shouldn’t have publish or deploy permissions.
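A minimal sketch of those three controls together, assuming AWS as the deploy target (the role ARN is a placeholder):

```yaml
permissions: {}                  # default-deny at the workflow level

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read             # the build can read code and nothing else
    steps:
      - uses: actions/checkout@v4   # pin to a commit SHA in practice
      - run: make build

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production      # the release path gets its own gate
    permissions:
      id-token: write            # mint a short-lived OIDC token
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy   # placeholder
          aws-region: us-east-1
      - run: make deploy
```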
AI assist: anomaly detection on permissions drift (a repo that suddenly starts requesting contents: write is a strong signal).
Reduce “bystander effect” blast radius
The bystander effect is real: you can get hit because someone else’s Action got compromised.
- Create an internal allowlist of approved Actions and reusable workflows.
- Mirror critical Actions internally (or vendor them) when feasible.
- Continuously inventory: “Which repos depend on which Actions?”
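The allowlist can be a version-controlled file that a policy check (or the LLM reviewer above) compares every workflow against. This format is hypothetical:

```yaml
# Hypothetical allowed-actions.yml consumed by an internal policy check.
allowed_actions:
  - actions/checkout               # first-party, widely scrutinized
  - actions/setup-node
mirrored_internally:
  - internal-mirror/changed-files  # vendored copy of a third-party Action
review_required_for_additions: true
```

GitHub also has a native org-level setting to allow only selected Actions and reusable workflows, which is worth enabling alongside any internal inventory.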
AI assist: graph analysis to map dependencies and calculate “blast radius score” per Action.
Monitor runtime behavior inside runners
If you can only do one detection upgrade, do this.
- Capture process execution (`bash`, `curl`, `python`, `node`) in runners
- Monitor outbound connections and DNS lookups
- Alert on secret-like strings leaving the environment (token-shaped patterns)
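If you'd rather not build runner agents yourself, egress-monitoring Actions exist for exactly this. The snippet below uses StepSecurity's harden-runner as one example; verify the parameters against its current docs before relying on them:

```yaml
steps:
  # Runs first so it can observe every later step's processes and egress.
  - uses: step-security/harden-runner@v2   # pin to a commit SHA in practice
    with:
      egress-policy: block                 # or "audit" while building a baseline
      allowed-endpoints: >
        github.com:443
        registry.npmjs.org:443
  - uses: actions/checkout@v4
  - run: npm ci && npm test
```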
AI assist: sequence models that learn normal build step ordering and flag unusual chains (example: checkout → npm install → curl to paste site → echo $GITHUB_TOKEN).
“People also ask”: quick answers teams want
How do GitHub Actions supply chain attacks usually start?
Most start with either a compromised third‑party Action, a poisoned dependency used in the pipeline, or a workflow misconfiguration that lets untrusted code run with secrets.
What’s the fastest way to reduce risk without slowing development?
Pin Actions by SHA, make workflow changes require review, and set GITHUB_TOKEN to read-only by default. Those three moves eliminate a lot of easy wins for attackers.
Can AI prevent GitHub Actions secrets exfiltration?
AI won’t prevent every attempt, but it can detect exfiltration behavior early (unexpected outbound calls, encoding patterns, abnormal token usage) and trigger containment before secrets are reused.
What to do next if you’re responsible for CI/CD security
If your pipeline can publish packages, cut releases, or deploy to production, treat it like a production system—because attackers already do.
For most teams, the practical path is combining:
- Preventive controls (pinning, least privilege, review gates)
- AI-powered detection (workflow anomaly detection, identity analytics, runtime monitoring)
- Response muscle (token rotation, secret scanning, rapid workflow rollback)
If you’re building an AI in Cybersecurity roadmap for 2026, CI/CD is one of the highest-ROI places to start: lots of telemetry, repeatable patterns, and a clear definition of “normal.”
What would you rather find first: an AI alert that a build started exfiltrating secrets, or a customer email telling you your release artifacts are compromised?