PyStoreRAT on GitHub: AI Defense for Dev Teams

AI in Cybersecurity • By 3L3C

PyStoreRAT hid in fake GPT and OSINT GitHub repos. Learn how AI-driven detection and practical controls stop script-based supply chain malware.

Tags: PyStoreRAT, GitHub supply chain, malware analysis, AI threat detection, developer security, OSINT

Most teams still treat GitHub “stars” as social proof. Attackers are counting on that.

A December 2025 malware campaign shows how easy it is to weaponize developer trust: fake Python repos posing as OSINT tools, “GPT utilities,” DeFi bots, and security helpers delivered a new modular remote access trojan (RAT) dubbed PyStoreRAT. The trick wasn’t sophisticated exploitation. It was branding, timing, and distribution—then a tiny loader that quietly kicked off a multi-stage infection chain.

This matters for the AI in Cybersecurity conversation because it’s not just “malware on GitHub.” It’s malware wearing AI tooling as a costume. If your org builds with open source, runs internal scripts, or evaluates AI wrappers and automation tools, you’re in the target zone.

What happened: fake “GPT utilities” that weren’t utilities

PyStoreRAT spread through GitHub-hosted repositories that looked like useful projects—OSINT kits, GPT wrappers, dev utilities—yet contained only minimal code. That code’s real job was to download a remote HTA file and execute it using mshta.exe.

A detail that should make every security leader pause: attackers reportedly waited. Repos accumulated attention and sometimes hit trending lists, then malicious payloads were introduced later through “maintenance” commits. That’s the supply chain playbook for 2025: build trust first, cash it in later.

Why GitHub is such an effective delivery channel

GitHub succeeds because it compresses the cost of trust:

  • A repo looks “real” if it has stars, forks, issues, and a few contributors
  • People copy/paste installation steps quickly (especially for OSINT and automation)
  • Many orgs still allow outbound network access during development and testing

Attackers amplified that trust by artificially inflating stars and forks, echoing earlier “ghost stargazer” patterns seen this year across trojanized repositories.

How the PyStoreRAT infection chain works (and why it slips past controls)

PyStoreRAT isn’t a single binary you scan once. It’s a modular, multi-stage implant designed to execute different payload formats on demand—EXE, DLL, PowerShell, MSI, Python, JavaScript, and HTA.

Here’s the operational flow in plain language:

  1. User runs the repo code (Python/JS “loader stub”) believing it’s a tool
  2. Loader pulls a remote HTA and executes it via mshta.exe
  3. The chain delivers PyStoreRAT and profiles the host
  4. A scheduled task (disguised as an NVIDIA self-update) establishes persistence
  5. The RAT phones home for commands and can fetch additional modules

The “mshta.exe problem” is back—because it still works

mshta.exe is a legitimate Windows binary used to run HTML Applications. It’s also a long-standing attacker favorite because it can execute script content with fewer user prompts than you’d expect.

Security teams often focus on high-signal malware indicators (unsigned binaries, suspicious drivers, obvious persistence). PyStoreRAT instead starts with tiny scripts and living-off-the-land execution. That delays detection until later stages—exactly when damage is easier.

Targeting goes where the money is: crypto wallets and follow-on stealers

Once running, PyStoreRAT can scan for cryptocurrency wallet-related files. The campaign also used an information stealer (Rhadamanthys) as a follow-on payload.

Even if your company doesn’t “do crypto,” you still have risk:

  • Developers and analysts may have wallets on workstations
  • Browser sessions, saved credentials, and tokens are monetizable
  • Compromised endpoints become launchpads into CI/CD and cloud consoles

Why this is an AI-themed threat, not just a malware story

Attackers didn’t need AI to write the malware. They needed AI branding to get the malware executed.

“GPT wrapper” repositories attract exactly the people with:

  • Broad permissions
  • Access to sensitive environments
  • A habit of running scripts quickly

That’s why this campaign fits the AI in Cybersecurity series: AI has expanded the lure surface area. The more teams adopt AI utilities, the easier it becomes to disguise malware as “just another helper script.”

Here’s the stance I’ll take: if your org encourages experimentation with AI developer tooling but doesn’t apply software supply chain controls to those tools, you’re effectively paying attackers to target you.

3 ways AI can detect GitHub-based supply chain malware earlier

Rule-based detection still matters, but PyStoreRAT shows why defenders need AI-driven anomaly detection to catch the “looks normal until it doesn’t” pattern.

1) AI can score repository trust beyond stars and forks

A useful internal control is a repo risk score that blends code signals, contributor behavior, and timeline anomalies.

AI models (or even simpler ML classifiers) can flag suspicious patterns such as:

  • Repos that gain stars unusually fast relative to commit history
  • Sudden introduction of obfuscated loader code after a “quiet” period
  • High ratio of README content to real functionality
  • Dependency install instructions that invoke shell commands unnecessarily

This matters because the deception is social and behavioral. Behavioral detection is where AI shines.
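As a sketch of what such a score might look like, here is a minimal Python heuristic. The `RepoSnapshot` fields are hypothetical stand-ins for metadata you would pull from the GitHub API, and the thresholds are illustrative rather than tuned; a real system would learn weights from labeled examples.

```python
from dataclasses import dataclass

@dataclass
class RepoSnapshot:
    """Hypothetical repo metadata, e.g. collected via the GitHub REST API."""
    stars: int
    age_days: int
    commit_count: int
    readme_bytes: int
    code_bytes: int
    recent_commits_add_obfuscation: bool  # e.g. base64 blobs / exec() in diffs

def repo_risk_score(repo: RepoSnapshot) -> float:
    """Blend simple behavioral signals into a 0..1 risk score.
    Thresholds are illustrative, not tuned."""
    score = 0.0
    # Stars accumulating much faster than commit history suggests inflation.
    stars_per_commit = repo.stars / max(repo.commit_count, 1)
    if stars_per_commit > 50:
        score += 0.35
    # Young repos with big audiences deserve extra scrutiny.
    if repo.age_days < 30 and repo.stars > 500:
        score += 0.25
    # Mostly README, little real code: marketing over functionality.
    if repo.readme_bytes > 4 * max(repo.code_bytes, 1):
        score += 0.2
    # Obfuscated loader code introduced after a quiet period.
    if repo.recent_commits_add_obfuscation:
        score += 0.3
    return min(score, 1.0)
```

The point of a composite score is that no single signal is damning on its own; a trending repo with fast stars is normal, but fast stars plus a thin codebase plus a late obfuscated commit is the PyStoreRAT shape.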

2) AI can spot “loader stubs” by intent, not exact signatures

Loader stubs are small. They don’t always match known signatures. But they share intent:

  • Fetch remote content
  • Execute it via script hosts (mshta.exe, wscript.exe, powershell.exe)
  • Avoid visibility checks

AI-assisted code analysis can classify snippets by behavior (download + execute) even when variable names, URLs, and formatting change. For security teams, this is a practical use of semantic code scanning: flag what the code does, not what it looks like.
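A rough illustration of intent-based classification using Python's `ast` module. The call-name lists are illustrative only and would generate false positives (e.g. `dict.get`) without further context; a production scanner would add data-flow analysis or an ML layer on top.

```python
import ast

# Calls that fetch remote content and calls that hand content to a script host.
FETCH_CALLS = {"urlopen", "urlretrieve", "get", "request"}
EXEC_CALLS = {"system", "popen", "call", "run", "Popen", "check_output"}
SCRIPT_HOSTS = ("mshta", "wscript", "cscript", "powershell", "cmd")

def _call_name(node: ast.Call) -> str:
    f = node.func
    return f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", "")

def looks_like_loader(source: str) -> bool:
    """Flag snippets whose *behavior* is download + execute,
    regardless of variable names, URLs, or formatting."""
    tree = ast.parse(source)
    fetches = executes = False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = _call_name(node)
            if name in FETCH_CALLS:
                fetches = True
            if name in EXEC_CALLS:
                executes = True
        # String constants naming script hosts raise suspicion too.
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if any(h in node.value.lower() for h in SCRIPT_HOSTS):
                executes = True
    return fetches and executes
```

Renaming variables or rotating URLs defeats signature matching, but it cannot hide the download-then-execute structure from a walk over the syntax tree.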

3) AI can correlate endpoint telemetry into a single “story”

Most organizations already collect plenty of signals:

  • Process creation events (cmd.exe → mshta.exe)
  • Scheduled task creation
  • Outbound connections to new domains
  • In-memory script execution patterns

The hard part is stitching them into a narrative quickly.

AI copilots for SOC analysts can reduce time-to-triage by automatically producing:

  • A timeline (what happened first, second, third)
  • A confidence score for malicious behavior
  • Suggested containment steps mapped to what was observed

That’s how you stop modular malware: faster understanding beats perfect signatures.
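The correlation step can be sketched in a few lines. The event shapes and confidence weights below are hypothetical, standing in for EDR process logs, task-scheduler audit events, and network telemetry:

```python
from datetime import datetime

# Each event: (timestamp, source, summary). In practice these would come from
# EDR process logs, task-scheduler auditing, and DNS/proxy telemetry.
events = [
    ("2025-12-03T10:02:11", "proc", "cmd.exe spawned mshta.exe with remote URL"),
    ("2025-12-03T10:01:58", "proc", "python.exe spawned cmd.exe"),
    ("2025-12-03T10:03:40", "task", "scheduled task 'NvidiaUpdateChecker' created"),
    ("2025-12-03T10:04:02", "net",  "outbound connection to newly-seen domain"),
]

# Illustrative weights: how strongly each signal type suggests compromise.
WEIGHTS = {"proc": 0.3, "task": 0.25, "net": 0.2}

def build_story(raw):
    """Order scattered signals into a timeline and a rough confidence score."""
    timeline = sorted(raw, key=lambda e: datetime.fromisoformat(e[0]))
    confidence = min(sum(WEIGHTS[src] for _, src, _ in timeline), 1.0)
    return timeline, confidence

timeline, confidence = build_story(events)
for ts, src, what in timeline:
    print(f"{ts}  [{src}]  {what}")
print(f"confidence: {confidence:.2f}")
```

Individually, each of these events might sit below an alert threshold; ordered into one narrative, they read unmistakably as the PyStoreRAT chain.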

Practical defenses: what to change this week

If you’re responsible for security, engineering productivity, or SOC operations, you can reduce exposure without slowing teams to a crawl.

Lock down script-host abuse without breaking workflows

Start with a policy stance: mshta.exe should almost never run on managed endpoints.

Actions that are widely effective:

  • Alert on or block mshta.exe execution (especially parented by cmd.exe)
  • Alert on scheduled tasks created from user contexts and disguised as vendor updaters
  • Monitor for unusual use of rundll32.exe with non-standard DLL paths
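As a sketch of the first two actions, here is a minimal matcher over process-creation events. The event schema is hypothetical; you would map it from your EDR's fields (for example, Sysmon process-creation events):

```python
# Suspicious parent→child pairs and command-line markers for script-host abuse.
SUSPICIOUS_PAIRS = {
    ("cmd.exe", "mshta.exe"),
    ("python.exe", "mshta.exe"),
    ("wscript.exe", "powershell.exe"),
}
REMOTE_MARKERS = ("http://", "https://")

def alert_on_script_host(event: dict) -> bool:
    """Return True when a process event matches the policy above.
    The `event` shape is hypothetical; adapt it to your EDR schema."""
    pair = (event.get("parent", "").lower(), event.get("image", "").lower())
    if pair in SUSPICIOUS_PAIRS:
        return True
    # Any mshta.exe launch with a remote URL argument is alert-worthy.
    if pair[1] == "mshta.exe" and any(
        m in event.get("cmdline", "").lower() for m in REMOTE_MARKERS
    ):
        return True
    return False
```

In environments where mshta.exe has no legitimate use, flip this from alerting to blocking via application control.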

Add “safe evaluation” lanes for GitHub tools

Most infections start on a workstation where someone is testing a tool.

Set up a safer pattern:

  1. Evaluate unknown repos only in isolated VMs or disposable dev containers
  2. Restrict outbound network access in that sandbox by default
  3. Require human review for any code that downloads remote payloads or executes script hosts

This isn’t bureaucracy. It’s the seatbelt.

Treat trending repos like untrusted email attachments

Teams often behave more cautiously with email than with code. That’s backward.

A workable checklist for engineers before running a repo:

  • Does it provide real functionality, or just a menu/UI?
  • Are there recent commits that introduce remote downloads or encoded blobs?
  • Do install steps include curl | bash, powershell -enc, or anything that spawns system script engines?
  • Is the author's identity and history consistent, or is the account newly created or long dormant?
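Parts of that checklist can be automated before a human ever runs anything. A minimal regex-based screen over a README or install script might look like this (the patterns are illustrative, not exhaustive):

```python
import re

# Patterns that should trigger human review before anyone runs install steps.
RISKY_PATTERNS = [
    (re.compile(r"curl[^|\n]*\|\s*(ba)?sh"), "pipe-to-shell install"),
    (re.compile(r"powershell[^\n]*-enc", re.I), "encoded PowerShell"),
    (re.compile(r"\b(mshta|wscript|cscript)\.exe", re.I), "script-host invocation"),
    (re.compile(r"base64\s*(-d|--decode)"), "base64-decoded blob"),
]

def review_install_steps(text: str) -> list[str]:
    """Return human-readable findings for risky install instructions."""
    return [label for pattern, label in RISKY_PATTERNS if pattern.search(text)]
```

A hit doesn't prove malice; it proves the repo deserves the sandbox lane rather than a developer's primary workstation.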

Put AI on the defender’s side of developer tooling

If your organization already uses AI in engineering, add the security layer too:

  • AI-assisted PR review rules tuned for “download-and-execute” patterns
  • Anomaly detection for repo usage and script execution across endpoints
  • Automated enrichment in the SOC when dev endpoints execute rare binaries

The goal isn’t to ban experimentation. It’s to make experimentation survivable.

What about SetcodeRat and regional targeting—why it matters

The same news cycle included another RAT (SetcodeRat) distributed through malvertising lures and gated by system language/region checks (Chinese-speaking locales).

The common thread is operational discipline: modern malware campaigns increasingly include environment checks to reduce exposure and analysis. PyStoreRAT also includes security-product awareness (checking installed AV and searching for strings associated with specific endpoint tools).
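The locale-gating trick is simple to recognize once you have seen its shape. A benign Python sketch of the pattern, shown only so analysts can spot it when reversing samples (note that `locale.getdefaultlocale` is deprecated in recent Python but remains common in the wild):

```python
import locale

def is_target_locale(prefixes: tuple = ("zh",)) -> bool:
    """Region gating as seen in campaigns like SetcodeRat: the payload
    activates only when the system language matches the target region.
    Deprecated API kept here because it's what samples actually call."""
    lang, _ = locale.getdefaultlocale() or (None, None)
    return bool(lang) and lang.lower().startswith(prefixes)
```

For sandbox operators, the defensive corollary is to detonate samples under multiple locale and region configurations, since a sample that exits cleanly in an en-US sandbox may still be live elsewhere.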

For defenders, that means two things:

  • Detection should prioritize behavioral sequences over single indicators
  • AI-driven correlation becomes more valuable as malware becomes more selective

Next steps: a better standard for “AI tools” on your endpoints

PyStoreRAT is a reminder that “AI utility” has become a high-performing lure category. Attackers don’t need to break your firewall when they can convince a developer to run a script that opens the door for them.

If you’re building an AI in Cybersecurity roadmap for 2026, put this near the top: use AI to protect the systems that build and run AI tooling. That means anomaly detection across repos, code scanning that understands intent, and SOC workflows that turn scattered telemetry into fast decisions.

If your team pulled a random GitHub “GPT helper” this month, would you be able to prove it didn’t execute a remote HTA via mshta.exe—and would you know within minutes, not days?