AI-Themed GitHub Repos Deliver PyStoreRAT Malware

AI in Cybersecurity · By 3L3C

AI-themed GitHub repos are being used to spread PyStoreRAT via tiny loader stubs and mshta.exe. Learn how to detect and stop this supply chain tactic.

Tags: PyStoreRAT, GitHub security, Supply chain security, Malware analysis, Threat detection, OSINT tooling


Most teams still treat GitHub stars and “trending” badges like trust signals. Attackers are betting on that.

In a recent campaign, threat actors planted tiny Python and JavaScript “loader” stubs inside GitHub repositories disguised as OSINT tools, GPT utilities, and developer helpers. The code looked harmless—sometimes it barely did anything—yet it quietly pulled a remote HTA file and executed it with mshta.exe, ultimately installing a new modular remote access trojan dubbed PyStoreRAT.

This is exactly the kind of modern supply chain attack where AI in cybersecurity earns its keep: you’re not defending against one file hash anymore. You’re defending against social proof, “maintenance commits,” living-off-the-land execution, and modular payloads that change shape.

What happened: PyStoreRAT in “OSINT” and “GPT wrapper” clothing

Answer first: The campaign weaponized GitHub repos that pretended to be useful AI/OSINT tooling, but actually served as a first-stage downloader for PyStoreRAT.

Researchers observed GitHub-hosted repos themed around:

  • OSINT collectors and “investigation” utilities
  • DeFi bots and crypto tooling
  • GPT wrappers / prompt utilities / “AI dev helpers”
  • Security-themed scripts that appeal to analysts

The repos often contained only a few lines of code—just enough to download and run a remote HTA payload via mshta.exe. That HTA then delivered PyStoreRAT.

Two details matter for defenders:

  1. The repos didn’t have to be sophisticated. Some were basically fake menus or placeholders.
  2. The trust exploit wasn’t technical first—it was social. Stars, forks, trending placement, and promotional posts on social media did a lot of the heavy lifting.

This is the uncomfortable truth: developers and security practitioners can be high-value targets precisely because they’re accustomed to running scripts and tooling.

Why mshta.exe keeps showing up

Answer first: mshta.exe is a built-in Windows binary that executes HTML Application (HTA) content, making it a convenient “live off the land” launcher.

Using legitimate system utilities to execute malicious content helps attackers:

  • Reduce friction (no custom dropper needed)
  • Blend into normal Windows process behavior
  • Delay or dodge EDR detections that rely on known malware signatures

If you’re defending endpoints, treat unexpected mshta.exe usage as suspicious by default—especially when it’s spawned by scripting runtimes (Python, Node) or cmd.exe.
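That guidance is easy to encode as a first-pass detection. Below is a minimal sketch of such a check; the event field names (`parent_image`, `child_image`) are assumptions about your EDR's telemetry schema, and the parent list is illustrative, not exhaustive:

```python
# Flag mshta.exe launches whose parent is a scripting runtime or shell.
# Field names (parent_image, child_image) are assumed telemetry fields --
# map them to whatever your EDR actually emits.

SUSPICIOUS_PARENTS = {"python.exe", "pythonw.exe", "node.exe", "cmd.exe", "wscript.exe"}

def is_suspicious_mshta(event: dict) -> bool:
    """Return True when mshta.exe is spawned by a scripting runtime or shell."""
    child = event.get("child_image", "").lower().rsplit("\\", 1)[-1]
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    return child == "mshta.exe" and parent in SUSPICIOUS_PARENTS
```

In practice you would tune this with an allowlist for the rare legitimate HTA workflows, then alert on everything else.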

Why this campaign worked: trust, timing, and “maintenance commits”

Answer first: The attackers built credibility first, then slipped in malware later.

A pattern worth calling out: repositories were published, promoted, and allowed to gain traction. Then the malicious payload was introduced in later commits disguised as routine “maintenance.” Reports indicate some repos landed on GitHub’s trending lists before malicious updates appeared.

This is a supply chain tactic your process needs to anticipate:

  • New repo + big star count isn’t trust. It can be bought, botted, or coordinated.
  • A repo that was clean last month can be hostile today. Point-in-time reviews don’t hold.

From an “AI in cybersecurity” lens, this is a textbook case for continuous behavioral analysis:

  • Watch for repo updates that introduce network fetch + execution
  • Compare commit intent vs. code behavior (e.g., a “fix README” commit that adds mshta.exe execution)
  • Flag abrupt dependency or execution-path changes
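The first two checks above can be approximated with a simple heuristic over commit diffs. This is a hypothetical sketch, not a production scanner: the regex lists are illustrative, and real tooling would parse the diff properly and score rather than binary-flag:

```python
import re

# Heuristic: flag a commit whose ADDED lines introduce both a network fetch
# and an execution primitive -- the classic loader signature.
# Pattern lists are illustrative assumptions, not an exhaustive catalog.
FETCH_PATTERNS = [r"urllib\.request", r"requests\.get", r"Invoke-WebRequest", r"\bcurl\b", r"\bfetch\("]
EXEC_PATTERNS = [r"\bmshta(\.exe)?\b", r"subprocess\.", r"os\.system", r"child_process", r"\beval\("]

def commit_looks_like_loader(diff_text: str) -> bool:
    """Return True if the diff's added lines contain both fetch and exec patterns."""
    added = "\n".join(
        line[1:] for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    )
    fetches = any(re.search(p, added) for p in FETCH_PATTERNS)
    executes = any(re.search(p, added) for p in EXEC_PATTERNS)
    return fetches and executes
```

Run this in CI or in a repo-vetting pipeline so a "maintenance" commit that suddenly fetches and executes remote content gets a human review before anyone runs it.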

The reality? It’s simpler than many teams think: malware campaigns increasingly look like growth hacking.

The social layer: stars, forks, and promo videos

Attackers reportedly promoted these repos on platforms like YouTube and X and inflated popularity metrics. Your risk model should treat these as adversary-controlled inputs.

A practical stance I’ve found useful:

“Popularity is not provenance.”

Provenance is: who built it, how it’s released, how it’s signed, how it’s reviewed, and whether behavior matches claims.

What PyStoreRAT does after infection (and why defenders should care)

Answer first: PyStoreRAT is modular, multi-stage, and built to pull additional payloads—making early detection far more valuable than late-stage cleanup.

PyStoreRAT is described as a modular implant capable of executing multiple module types:

  • EXE and DLL payloads
  • PowerShell in memory
  • MSI installers
  • Python and JavaScript
  • Remote HTA content

It also deploys a follow-on information stealer (Rhadamanthys) and includes host profiling behaviors such as:

  • System profiling and privilege checks
  • Antivirus product discovery
  • Scanning for cryptocurrency wallet-related files (including Ledger Live, Trezor, Exodus, Atomic, Guarda, BitBox02)

Attackers aren’t just “getting a foothold.” They’re aiming for operational control and monetization (credentials, wallets, and further payload delivery).

Evasion logic: security product awareness

Answer first: The loader checks for installed security products and adjusts execution paths to reduce visibility.

The campaign reportedly gathered a list of installed antivirus products and looked for strings like:

  • “Falcon” (a likely reference to CrowdStrike Falcon)
  • “Reason” (a likely reference to Cybereason or ReasonLabs)

Then it varied how it launched mshta.exe (direct vs. via cmd.exe). That’s not Hollywood-level stealth, but it’s enough to slip past brittle rules.

This is where AI-powered threat detection can outperform static logic:

  • Identify abnormal parent/child process chains (Python → cmd.exe → mshta.exe)
  • Correlate network fetches with unusual script execution
  • Detect “low code, high consequence” loaders based on intent
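The first bullet can be prototyped with a simple ancestry walk. This sketch assumes you can build a pid-to-parent map from your telemetry (e.g., process-creation events); the data shape here is an assumption:

```python
# Walk a process's ancestry and flag mshta.exe instances whose lineage
# includes a scripting runtime (Python -> cmd.exe -> mshta.exe and similar).
# `procs` maps pid -> {"image": process name, "ppid": parent pid or None};
# building that map from your EDR's events is left to your pipeline.

SCRIPT_RUNTIMES = {"python.exe", "pythonw.exe", "node.exe"}

def chain_to_mshta(pid: int, procs: dict) -> bool:
    """True if this pid is mshta.exe and a scripting runtime appears in its ancestry."""
    if procs[pid]["image"].lower() != "mshta.exe":
        return False
    cur = procs[pid]["ppid"]
    while cur is not None and cur in procs:
        if procs[cur]["image"].lower() in SCRIPT_RUNTIMES:
            return True
        cur = procs[cur]["ppid"]
    return False
```

Walking the full ancestry, rather than checking only the direct parent, catches the cmd.exe indirection this campaign reportedly used to vary its launch path.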

Persistence: disguised scheduled task

Answer first: PyStoreRAT persists via a scheduled task disguised as an NVIDIA self-update.

This is a reminder to harden scheduled task creation and to audit tasks that:

  • Mimic vendor updaters but lack legitimate binaries/signatures
  • Appear shortly after developer tooling execution
  • Trigger network activity at odd intervals
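A quick audit for the first bullet can be scripted. The sketch below is a hedged example: the keyword and trusted-path lists are assumptions you should tailor, and in a real audit you would feed it parsed output from `schtasks /query /fo CSV /v` plus a signature check on the action binary:

```python
# Flag scheduled tasks whose names mimic vendor updaters but whose action
# binaries live outside expected vendor install paths.
# VENDOR_KEYWORDS and TRUSTED_PREFIXES are illustrative assumptions.

VENDOR_KEYWORDS = ("nvidia", "adobe", "intel", "google update", "microsoft edge")
TRUSTED_PREFIXES = ("c:\\program files", "c:\\program files (x86)", "c:\\windows")

def task_is_suspicious(name: str, action_path: str) -> bool:
    """True when a vendor-sounding task runs a binary from an untrusted path."""
    n, p = name.lower(), action_path.lower()
    mimics_vendor = any(keyword in n for keyword in VENDOR_KEYWORDS)
    trusted_path = p.startswith(TRUSTED_PREFIXES)
    return mimics_vendor and not trusted_path
```

A task named like an NVIDIA updater that launches from a user profile directory is exactly the pattern described above, and this check surfaces it in one pass over your task inventory.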

The bigger lesson for the AI in Cybersecurity series: attackers are hijacking “AI trust”

Answer first: AI-themed tooling has become a lure category, and defenders need controls that assume developer curiosity will be exploited.

Over the last two years, “AI developer tools” have become the new shareware: wrappers, prompt helpers, automation scripts, local LLM launchers, and repo templates. That’s great for productivity—and perfect for attackers.

PyStoreRAT’s GitHub distribution highlights a shift:

  • Social engineering is now embedded in developer workflows
  • “Open-source supply chain security” isn’t limited to dependencies in package.json or requirements.txt
  • Malware can be one curl/Invoke-WebRequest away, hidden behind a friendly README

If your organization allows engineers, analysts, or IT to run tools from public repos (most do), you need a strategy that combines:

  • Policy (what’s allowed)
  • Guardrails (how it’s evaluated)
  • Detection (how it’s monitored)

Practical defenses: what to change this week

Answer first: Reduce the chance of execution, then detect suspicious behavior fast—especially script-driven mshta.exe chains.

Here’s a pragmatic checklist that works even if you can’t lock everything down.

1) Put guardrails on running code from public repos

If “clone and run” is normal in your environment, add friction in the right places:

  • Require execution in a sandboxed dev VM (not a workstation with VPN tokens and browser sessions)
  • Block direct execution of downloaded scripts from user profiles unless signed/approved
  • Use allowlists for internal tooling and approved open-source repos

Stance: engineers can still experiment—just not on their daily driver machine.

2) Watch for loader behaviors, not malware names

PyStoreRAT may change. The behavior pattern won’t.

Prioritize detections for:

  • Python/Node spawning cmd.exe or powershell.exe unexpectedly
  • Any process spawning mshta.exe without a clear business justification
  • Script processes making outbound calls and then executing new content
  • Scheduled task creation shortly after a developer tool run
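The third bullet is a correlation problem: a network fetch followed shortly by new execution from the same process. A minimal sketch, assuming your SIEM can emit per-process events with a type and timestamp (the event shape and 60-second window are assumptions to tune):

```python
from datetime import datetime, timedelta

# Correlate, per process, an outbound network event followed by a
# child-process spawn within a short window -- the fetch-then-execute pattern.
WINDOW = timedelta(seconds=60)

def fetch_then_execute(events: list) -> list:
    """events: dicts with 'pid', 'type' ('net' or 'spawn'), and 'ts' (datetime).
    Returns pids that spawned a child within WINDOW of a network event."""
    hits = []
    last_net = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "net":
            last_net[e["pid"]] = e["ts"]
        elif e["type"] == "spawn":
            t = last_net.get(e["pid"])
            if t is not None and e["ts"] - t <= WINDOW:
                hits.append(e["pid"])
    return hits
```

The output is a shortlist, not a verdict: join it against the process-chain and scheduled-task checks before paging anyone.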

3) Use AI-assisted triage to keep up with volume

Security teams don’t lose to PyStoreRAT because they’re careless. They lose because there are too many alerts.

A solid AI-assisted security operations workflow should:

  • Cluster similar events (same process chain, same repo, similar network patterns)
  • Summarize what changed in a repo update and why it’s risky
  • Prioritize incidents with high business impact (developer endpoints, privileged users)

This is where machine learning for anomaly detection and modern SOC copilots are genuinely useful—when they compress noise into decisions.
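Even before any ML, the clustering step can start as a deterministic grouping key. This sketch assumes alerts carry a process chain and destination domain (field names are hypothetical); a real pipeline would add fuzzier similarity on top:

```python
from collections import defaultdict

# Cluster near-duplicate alerts by a shared key (process chain + destination
# domain) so analysts triage one cluster instead of N copies of the same event.
# Field names (process_chain, dest_domain) are assumed alert-schema fields.

def cluster_alerts(alerts: list) -> dict:
    """Group alert dicts by (process chain, destination domain)."""
    clusters = defaultdict(list)
    for alert in alerts:
        key = (tuple(alert.get("process_chain", ())), alert.get("dest_domain", ""))
        clusters[key].append(alert)
    return dict(clusters)
```

Fifty endpoints hitting the same Python → cmd.exe → mshta.exe chain against one domain should land in the analyst queue as one incident, not fifty alerts.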

4) Treat developer endpoints as high-value assets

Developers and security engineers often have:

  • Access tokens (cloud, CI/CD, source control)
  • Elevated privileges
  • SSH keys and signing keys
  • Browser sessions tied to production systems

Hardening actions that pay off fast:

  • Enforce phishing-resistant MFA for code hosting and cloud consoles
  • Rotate tokens aggressively and monitor token creation
  • Run EDR with strong scripting telemetry and command-line capture

“People also ask”: quick answers teams need

Is GitHub safe to use?

Answer: GitHub is safe as a platform, but any public repo can be malicious. Trust has to be earned through provenance and verification, not stars.

Why target OSINT and GPT utilities?

Answer: Users of these tools are likely to run code quickly, often on machines with privileged access. It’s a high-conversion lure.

What’s the fastest detection win?

Answer: Alert on suspicious mshta.exe execution and unusual script-to-shell process chains, then correlate with fresh repo downloads.

What this means heading into 2026

This campaign isn’t a one-off. It’s a model: build a repo that looks helpful, juice the metrics, wait for adoption, then ship a “maintenance” commit that pulls a modular payload.

If you’re following our AI in Cybersecurity series, this is the thread that keeps coming up: defenders need systems that understand behavior and intent, not just known-bad indicators. AI-powered threat detection, automated code-risk analysis, and anomaly-driven endpoint monitoring are becoming table stakes—because attackers are already automating the other side.

If you want a practical next step, start by mapping where your org runs public code (dev machines, jump boxes, SOC laptops), then decide where AI-assisted monitoring and automated triage will have the biggest impact. What’s your weakest “clone and run” path right now?
