Hard-coded keys in Gladinet enabled forged access tickets and RCE attempts. Learn what to patch now and how AI anomaly detection can catch similar attacks sooner.
AI Detection Lessons From Gladinet Hard-Coded Keys
Nine organizations were already hit before many teams even knew they had a problem: attackers exploited hard-coded cryptographic keys in Gladinet CentreStack and Triofox to gain unauthorized access and push toward remote code execution (RCE). CISA has since put the issue on its Known Exploited Vulnerabilities list, with a federal remediation deadline of January 5, 2026.
Most companies get this wrong: they treat key management flaws as "crypto hygiene" and patch them when the next maintenance window opens. The Gladinet incident shows why that mindset fails. When keys are hard-coded, attackers don't need to break encryption; they just reuse the vendor's own secrets.
This post is part of our AI in Cybersecurity series, and I'm going to take a clear stance: this is exactly the kind of vulnerability and exploitation pattern AI-augmented detection should surface faster than humans can. Not because AI magically finds all bugs, but because it's very good at noticing the behavioral fingerprints that hard-coded keys and forged tokens leave behind.
What happened in the Gladinet attacks (and why it's so dangerous)
Answer first: Attackers abused hard-coded AES key material to forge or decrypt access tickets, then used that access to retrieve sensitive config data and attempt ViewState deserialization RCE.
Gladinet CentreStack and Triofox use access tickets to authorize file system actions as a given user. The problem: a function involved in generating the security key material (GenerateSecKey() in GladCtrl64.dll) returned the same 100-byte strings every time, meaning the derived cryptographic keys were effectively static.
Static keys create two ugly outcomes at once:
- Decryption at scale: If an attacker can observe or capture a ticket, they can decrypt it and learn what's inside.
- Forgery at will: Worse, if the key never changes, an attacker can craft a ticket of their own choosing and encrypt it so the server accepts it.
In this campaign, attackers used crafted requests against the /storage/filesvr.dn endpoint. Huntress observed tickets with blank Username and Password fields, causing the application to fall back to the IIS Application Pool Identity. That's a classic "oops, you just authenticated as the service account" failure mode.
The detail that should make defenders sweat: the attacker-set timestamp field was 9999, effectively creating a ticket that never expires. That turns a one-time exploit into a reusable skeleton key.
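To make the failure mode concrete, here is a minimal sketch of why a static key becomes a skeleton key. It assumes a simplified JSON ticket encrypted with AES-CBC via the Python cryptography package; the field names, format, and mode are illustrative assumptions, not Gladinet's actual implementation.

```python
# Illustrative only: a simplified "access ticket" protected by a static AES key.
import json
import os

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# The core flaw: the same key ships with every install, so it is effectively public.
STATIC_KEY = b"\x01" * 32  # placeholder bytes, not real key material

def encrypt_ticket(fields: dict) -> bytes:
    """Anyone holding STATIC_KEY can mint a ticket the server will accept."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    plaintext = padder.update(json.dumps(fields).encode()) + padder.finalize()
    enc = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(plaintext) + enc.finalize()

def decrypt_ticket(blob: bytes) -> dict:
    """...and decrypt any ticket captured in transit, in logs, or in caches."""
    iv, ciphertext = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return json.loads(unpadder.update(padded) + unpadder.finalize())

# A forged ticket: blank credentials plus an absurd expiry. Because the key never
# changes, the server cannot distinguish this from a ticket it issued itself.
forged = encrypt_ticket({"Username": "", "Password": "", "Expires": "9999"})
print(decrypt_ticket(forged))
```

Per-installation keys that rotate, or server-side validation of identity and expiry independent of the ticket contents, break both halves of this attack.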
How this becomes RCE: config theft → machine keys → ViewState
Answer first: The path to RCE ran through web.config, because it can reveal the machine key used by ASP.NET for ViewState validation and decryption.
Once an attacker can download web.config, they're often one step away from abusing ASP.NET ViewState deserialization. In Gladinet's case, attackers attempted a ViewState deserialization attack after obtaining the keys, then tried to retrieve its output.
Even when that "final step" fails (as it reportedly did in at least one observed attempt), the compromise is still severe:
- Sensitive configuration exposure
- Credential material and secrets risk
- Persistent unauthorized access if forged tickets continue to work
- Potential lateral movement using the service identity context
Why hard-coded keys keep shipping (and why patching alone isn't enough)
Answer first: Hard-coded keys persist because they're convenient during development, hard to remove late in release cycles, and rarely tested the way auth bypasses are tested.
Hard-coded crypto often starts as a shortcut:
- "We'll rotate it later."
- "It's only for internal tickets."
- "It's behind the firewall."
Those assumptions collapse in real environments where:
- Apps end up publicly exposed (sometimes unintentionally).
- Tickets and URLs are logged, forwarded, cached, or copied into chat tools.
- Attackers chain issues together (as seen here, chaining with earlier Gladinet flaws).
Gladinet is also a good reminder that vulnerability management isn't only about "critical CVEs." This issue initially had no CVE, yet it was actively exploited and later assigned CVE-2025-14611 with a 7.1 severity score. A lot of teams would have triaged a 7.1 below "drop everything." Attackers clearly disagreed.
Here's what works better than severity-score worship: prioritize anything that enables auth bypass, secret extraction, token forgery, or RCE chaining, especially on internet-facing services.
Where AI helps: detecting hard-coded key abuse by behavior, not signatures
Answer first: AI-augmented threat detection shines here because exploitation leaves anomalies in requests, identities, and token lifetimes that are easier to spot with models than with static rules.
Teams often ask, "Can AI find hard-coded keys in code?" Sometimes, yes, via secure code scanning and pattern detection. But the bigger win is that AI can detect the operational consequences of bad cryptography even when you didn't know the flaw existed.
1) AI for anomaly detection in token and ticket lifetimes
A timestamp set to 9999 is absurd on its face, but many logs don't make it obvious, and rule sets rarely cover every product's token format.
A practical approach:
- Model typical ticket TTLs and timestamp distributions per app
- Alert when token lifetimes exceed policy baselines (hours/days) by large margins
- Correlate ânever-expiringâ indicators with unusual download activity (config files, binaries)
This is the kind of detection that doesn't require knowing the CVE ahead of time.
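As a rough illustration, the sketch below baselines ticket lifetimes per application with a robust z-score (median plus median absolute deviation) and flags extreme outliers. The event fields, TTL values, and threshold are assumptions; the point is that a never-expiring ticket is a statistical scream, not a subtle signal.

```python
# Sketch: baseline token lifetimes per app and flag extreme outliers.
from collections import defaultdict
from statistics import median

def mad(values, center):
    """Median absolute deviation: a robust spread estimate (floored to avoid /0)."""
    return median(abs(v - center) for v in values) or 1.0

def flag_ttl_outliers(events, threshold=10.0):
    """events: dicts like {"app": "centrestack", "id": "t1", "ttl_seconds": 3600}."""
    by_app = defaultdict(list)
    for e in events:
        by_app[e["app"]].append(e)

    alerts = []
    for app, evts in by_app.items():
        ttls = [e["ttl_seconds"] for e in evts]
        center = median(ttls)
        spread = mad(ttls, center)
        for e in evts:
            score = abs(e["ttl_seconds"] - center) / spread  # robust z-score
            if score > threshold:
                alerts.append((app, e["id"], e["ttl_seconds"], round(score, 1)))
    return alerts

# Normal tickets live about an hour; a "9999"-style ticket dwarfs the distribution.
history = [{"app": "centrestack", "id": f"t{i}", "ttl_seconds": 3600} for i in range(200)]
history.append({"app": "centrestack", "id": "forged", "ttl_seconds": 250_000_000_000})
print(flag_ttl_outliers(history))
```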
2) AI for identity-context drift (blank credentials → service identity)
When Username/Password fields are blank and the app "falls back" to an IIS Application Pool Identity, you get identity-context drift: actions are executed under an unexpected principal.
AI-based baselining can flag:
- File access performed under service identities that typically donât browse storage endpoints
- New combinations of endpoint + identity + file path
- Spikes in privileged reads (like configuration retrieval) coming from non-admin workflows
If you've ever tried to hand-write rules for every "service account doing weird things" scenario, you already know why this matters.
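A minimal version of that baselining, sketched below, simply tracks which endpoints each identity normally touches and alerts on novel pairings once enough history exists. The identity string, endpoint, and query parameter are illustrative; a production model would also weight request volume, time of day, and data sensitivity.

```python
# Sketch: flag identity-context drift -- a principal hitting an endpoint it has
# never used before. Event fields and strings are illustrative assumptions.
from collections import defaultdict

class IdentityBaseline:
    def __init__(self, min_history=50):
        self.seen = defaultdict(set)    # identity -> endpoints observed so far
        self.counts = defaultdict(int)  # identity -> total requests observed
        self.min_history = min_history

    def check(self, identity: str, endpoint: str):
        """Return an alert dict when a well-observed identity shows a novel endpoint."""
        novel = endpoint not in self.seen[identity]
        enough_history = self.counts[identity] >= self.min_history
        self.seen[identity].add(endpoint)
        self.counts[identity] += 1
        if novel and enough_history:
            return {"identity": identity, "endpoint": endpoint, "reason": "novel pairing"}
        return None

baseline = IdentityBaseline()
# Train on normal traffic: the app pool identity serving ordinary file requests.
for _ in range(500):
    baseline.check(r"IIS APPPOOL\portal", "/storage/files")
# A service identity suddenly reading configuration is exactly the drift to surface.
print(baseline.check(r"IIS APPPOOL\portal", "/storage/filesvr.dn?path=web.config"))
```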
3) AI for URL pattern and parameter outlier detection
The exploit traffic included crafted requests to /storage/filesvr.dn with a particular encrypted string defenders were advised to hunt for (the encrypted representation of the web.config path).
Signature hunting is useful, but brittle. AI-assisted clustering helps you find:
- Rare parameter formats and unusually long token blobs
- Sudden appearance of high-entropy values in query strings
- Repeated replay of identical crafted URLs (consistent with a "reusable ticket")
That last point is key: a forged ticket that never expires often results in replay behavior that stands out statistically.
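A lightweight starting point, sketched below, scores query-string values by length and Shannon entropy and counts verbatim URL replays. The thresholds and example URL are assumptions meant to illustrate the idea, not tuned detections.

```python
# Sketch: flag long, high-entropy query parameters and replayed identical URLs.
import math
import secrets
from collections import Counter
from urllib.parse import parse_qsl, urlsplit

def shannon_entropy(s: str) -> float:
    """Bits per character; random base64-style blobs score far above normal params."""
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious_params(url: str, min_len=64, min_entropy=4.0):
    """Return query parameters that look like long encrypted or encoded blobs."""
    hits = []
    for name, value in parse_qsl(urlsplit(url).query, keep_blank_values=True):
        if len(value) >= min_len and shannon_entropy(value) >= min_entropy:
            hits.append((name, len(value), round(shannon_entropy(value), 2)))
    return hits

def replayed_urls(urls, min_repeats=20):
    """A forged, never-expiring ticket tends to be replayed verbatim."""
    return {u: n for u, n in Counter(urls).items() if n >= min_repeats}

# Simulated crafted request: a ~128-character random blob stands out immediately.
print(suspicious_params("/storage/filesvr.dn?s=" + secrets.token_urlsafe(96)))
```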
4) AI to reduce time-to-triage during patch windows
Mid-December is peak "change-freeze" season in many organizations. Attackers love it. AI-assisted SOC workflows can shorten response cycles by:
- Auto-enriching suspicious IPs and correlating across WAF, IIS, EDR, and app logs
- Summarizing the attack chain ("config read → key extraction → ViewState attempt")
- Prioritizing hosts likely to expose secrets (internet-facing, high request volume, older versions)
The goal isn't to replace analysts. It's to stop losing hours to manual correlation while an attacker is actively iterating.
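As a sketch of what that correlation looks like, the snippet below merges normalized events from different log sources by source IP into a time-ordered chain an analyst (or an LLM-based summarizer) can read at a glance. The event shapes, sources, and IP address are illustrative.

```python
# Sketch: merge WAF, IIS, and app-log events by source IP into per-attacker timelines.
from collections import defaultdict
from datetime import datetime

def build_timelines(events):
    """events: dicts with 'ts' (ISO 8601), 'src_ip', 'source', and 'summary' fields."""
    by_ip = defaultdict(list)
    for e in events:
        by_ip[e["src_ip"]].append(e)
    for timeline in by_ip.values():
        timeline.sort(key=lambda e: datetime.fromisoformat(e["ts"]))
    return by_ip

def summarize(timeline):
    """Collapse a timeline into a one-line chain for fast triage."""
    return " -> ".join(f"{e['source']}:{e['summary']}" for e in timeline)

events = [
    {"ts": "2025-12-09T03:12:05", "src_ip": "203.0.113.7", "source": "iis", "summary": "GET /storage/filesvr.dn"},
    {"ts": "2025-12-09T03:12:41", "src_ip": "203.0.113.7", "source": "app", "summary": "config file read"},
    {"ts": "2025-12-09T03:15:02", "src_ip": "203.0.113.7", "source": "waf", "summary": "ViewState payload blocked"},
]
for ip, timeline in build_timelines(events).items():
    print(ip, "=>", summarize(timeline))
```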
A defenderâs playbook for Gladinet-style crypto failures
Answer first: Treat this as two problems: (1) immediate containment and hardening, and (2) systemic prevention so hard-coded keys and token forgery don't surprise you again.
Immediate actions (next 24–48 hours)
- Patch to the fixed version of CentreStack/Triofox (Gladinet released 16.12.10420.56791 on December 8, 2025).
- Hunt for exploit traces in web and application logs (see the log-hunting sketch after this list):
  - Requests to /storage/filesvr.dn
  - Repeated download attempts of sensitive paths (especially config files)
  - Suspicious high-entropy parameters and replayed URLs
- Check for the attacker infrastructure noted in reporting (for example, a specific IP was associated with observed activity).
- If compromise is suspected, rotate the ASP.NET machine key and restart IIS across nodes.
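For the hunt itself, a quick pass over IIS logs is often enough to confirm or rule out suspicious access. The sketch below assumes default W3C extended logging (with a #Fields: header line); adjust field names and thresholds to your configuration, and treat hits as leads to investigate, not verdicts.

```python
# Sketch: scan IIS W3C logs for requests to the ticket endpoint with long query blobs.
import sys

TARGET_STEM = "/storage/filesvr.dn"

def hunt(log_path, min_query_len=64):
    fields = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names for the rows that follow
                continue
            if not line or line.startswith("#"):
                continue
            row = dict(zip(fields, line.split()))
            if row.get("cs-uri-stem", "").lower() != TARGET_STEM:
                continue
            query = row.get("cs-uri-query", "-")
            if len(query) >= min_query_len:
                print(row.get("date", "?"), row.get("time", "?"),
                      row.get("c-ip", "?"), row.get("sc-status", "?"), query[:80])

if __name__ == "__main__":
    for path in sys.argv[1:]:  # usage: pass one or more IIS u_ex*.log files
        hunt(path)
```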
Practical stance: if you found evidence of config access, assume secrets are burned. Rotate keys, then investigate. Don't investigate first and rotate later.
Hardening moves that pay off all year
Reduce blast radius of configuration exposure
- Lock down read access to web.config and other sensitive configs at the filesystem and IIS levels.
- Ensure the application pool identity has the minimum privileges required.
- Centralize secrets in a vault where possible, instead of static config values.
Add "crypto misuse" checks into SDLC and procurement
If you build software:
- Add SAST rules for hard-coded keys, IVs, and static salts.
- Block builds when cryptographic material is embedded in binaries or config templates (a minimal CI gate sketch follows this list).
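A minimal pre-commit/CI gate along those lines might look like the sketch below. The regexes are illustrative and intentionally noisy; a real pipeline would pair them with a dedicated secret scanner and proper SAST rules rather than rely on them alone.

```python
# Sketch: fail the build when likely hard-coded key material appears in source
# or config templates. Patterns are illustrative, not exhaustive.
import re
import sys
from pathlib import Path

PATTERNS = [
    # key = "base64ish..." style assignments
    re.compile(r"""(?i)\b(aes[_-]?key|secret[_-]?key|encryption[_-]?key)\b\s*[:=]\s*["'][A-Za-z0-9+/=]{16,}["']"""),
    # explicit ASP.NET machineKey attributes committed to templates
    re.compile(r"""(?i)\b(validationKey|decryptionKey)\s*=\s*["'][0-9A-Fa-f]{40,}["']"""),
    # bare 32-byte hex constants (noisy, but cheap to triage)
    re.compile(r"\b[0-9A-Fa-f]{64}\b"),
]

SCAN_SUFFIXES = {".cs", ".config", ".json", ".xml", ".yaml", ".yml", ".py"}

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in SCAN_SUFFIXES:
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="replace")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible hard-coded key material")
                findings += 1
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(1 if scan(root) else 0)  # non-zero exit fails the CI job
```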
If you buy software:
- Ask vendors directly how keys are generated, stored, and rotated.
- Require evidence of secure key management and rotation mechanisms.
Hard-coded keys are not a "bug." They're a product decision. Treat them that way.
Instrument the app so AI can see what matters
AI detection is only as good as your signals. For internet-facing enterprise apps, prioritize:
- Structured logs for auth decisions (who, what identity, why allowed)
- Request IDs that flow across reverse proxy/WAF → app → storage actions
- Normalized fields for token lifetime, creation time, and expiry
If the only thing you log is a 200 status code, you're blind.
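A sketch of what "structured enough for a model" means in practice: one JSON event per auth decision, carrying the identity actually used, the request ID that ties hops together, and a normalized token lifetime. The field names below are a suggested schema, not a standard, and the example values are illustrative.

```python
# Sketch: structured auth-decision logging with normalized token-lifetime fields.
import json
import logging
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("authz")

def log_auth_decision(request_id, principal, endpoint, decision,
                      token_issued_at, token_expires_at):
    logger.info(json.dumps({
        "event": "auth_decision",
        "ts": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,   # should flow from WAF/proxy through to storage actions
        "principal": principal,     # the identity actually used, including fallbacks
        "endpoint": endpoint,
        "decision": decision,       # allow/deny plus the reason
        "token_ttl_seconds": (token_expires_at - token_issued_at).total_seconds(),
        "token_expires_at": token_expires_at.isoformat(),
    }))

if __name__ == "__main__":
    issued = datetime.now(timezone.utc)
    log_auth_decision(
        request_id="req-42",                 # illustrative values throughout
        principal="IIS APPPOOL\\portal",
        endpoint="/storage/filesvr.dn",
        decision="allow:ticket",
        token_issued_at=issued,
        token_expires_at=issued + timedelta(hours=1),
    )
```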
"People also ask" (and the answers you actually need)
Is a hard-coded key always exploitable?
Answer: Not always, but it's always a serious design flaw. If the key protects anything an attacker can influence (tickets, cookies, tokens, encrypted URLs), exploitation is often straightforward.
Why did attackers target web.config specifically?
Answer: Because web.config can contain or expose the ASP.NET machine key, which is central to ViewState integrity and can enable deserialization-based RCE.
Can AI really detect this faster than rules?
Answer: Yes, when the exploit produces statistical outliers: never-expiring tickets, identity fallback behavior, unusual endpoint access patterns, and replayed crafted URLs. Rules can catch known strings. AI can catch the pattern even when the string changes.
Where this fits in the AI in Cybersecurity story
Hard-coded keys are a reminder that "AI security" isn't only about fighting AI-powered attackers. It's about using AI to catch the kinds of failures that slip past process: insecure cryptography, brittle auth flows, and token formats that are easy to forge.
If you're responsible for a SOC, this is the week to pressure-test your detection: could you spot a never-expiring authorization token being replayed against a storage endpoint? If not, you're relying on luck and patch speed.
The reality? It's simpler than it sounds: patch fast, rotate secrets when exposure is plausible, and use AI-driven anomaly detection to catch the next "static key" incident before it becomes your incident. What signal would you want your team to see first: the CVE announcement, or the first forged ticket hitting your logs?