AI Detection Lessons From Gladinet Hard-Coded Keys

AI in Cybersecurity · By 3L3C

Hard-coded keys in Gladinet enabled forged access tickets and RCE attempts. Learn what to patch now and how AI anomaly detection can catch similar attacks sooner.

Tags: CVE-2025-14611 · Gladinet · CentreStack · Triofox · Key Management · AI Security Monitoring · Incident Response


Nine organizations were already hit before many teams even knew they had a problem: attackers exploited hard-coded cryptographic keys in Gladinet CentreStack and Triofox to gain unauthorized access and push toward remote code execution (RCE). CISA has since put the issue on its Known Exploited Vulnerabilities list, with a federal remediation deadline of January 5, 2026.

Most companies get this wrong: they treat key management flaws as “crypto hygiene” and patch them when the next maintenance window opens. The Gladinet incident shows why that mindset fails. When keys are hard-coded, attackers don’t need to break encryption—they just reuse the vendor’s own secrets.

This post is part of our AI in Cybersecurity series, and I’m going to take a clear stance: this is exactly the kind of vulnerability and exploitation pattern AI-augmented detection should surface faster than humans can. Not because AI magically finds all bugs—but because it’s very good at noticing the behavioral fingerprints that hard-coded keys and forged tokens leave behind.

What happened in the Gladinet attacks (and why it’s so dangerous)

Answer first: Attackers abused hard-coded AES key material to forge or decrypt access tickets, then used that access to retrieve sensitive config data and attempt ViewState deserialization RCE.

Gladinet CentreStack and Triofox use access tickets to authorize file system actions as a given user. The problem: a function involved in generating the security key material (GenerateSecKey() in GladCtrl64.dll) returned the same 100-byte string on every call, meaning the derived cryptographic keys were effectively static.

Static keys create two ugly outcomes at once:

  1. Decryption at scale: If an attacker can observe or capture a ticket, they can decrypt it and learn what’s inside.
  2. Forgery at will: Worse, if the key never changes, an attacker can craft a ticket of their own choosing and encrypt it so the server accepts it.

In this campaign, attackers used crafted requests against the /storage/filesvr.dn endpoint. Huntress observed tickets with blank Username and Password fields, causing the application to fall back to the IIS Application Pool Identity. That’s a classic “oops, you just authenticated as the service account” failure mode.

The detail that should make defenders sweat: the attacker-set timestamp field was 9999, effectively creating a ticket that never expires. That turns a one-time exploit into a reusable skeleton key.
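To make "forgery at will" concrete, here's a toy ticket scheme in Python. It is purely illustrative: an HMAC-signed JSON blob, not Gladinet's actual ticket format or cipher. The lesson carries over regardless of the crypto primitive: once the key is static and known, anyone can mint tickets the server must accept.

```python
import hmac
import hashlib
import json

# Hypothetical, simplified ticket scheme for illustration only --
# NOT Gladinet's actual format. If the key is static and shipped in
# the product binary, the attacker has it too.
STATIC_KEY = b"hard-coded-key-shipped-in-binary"

def mint_ticket(fields: dict) -> str:
    """Serialize fields and attach a MAC under the (static) key."""
    body = json.dumps(fields, sort_keys=True)
    sig = hmac.new(STATIC_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def server_accepts(ticket: str) -> bool:
    """The server's check: recompute the MAC under the same key."""
    body, sig = ticket.rsplit("|", 1)
    expected = hmac.new(STATIC_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# An attacker who knows the static key mints a never-expiring ticket
# with blank credentials -- and the server cannot tell it apart:
forged = mint_ticket({"user": "", "password": "", "expires": 9999})
print(server_accepts(forged))  # True
```

The server has no way to distinguish a forged ticket from a legitimate one, because the only secret involved is one the attacker already holds.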

How this becomes RCE: config theft → machine keys → ViewState

Answer first: The path to RCE ran through web.config, because it can reveal the machine key used by ASP.NET for ViewState validation and decryption.

Once an attacker can download web.config, they're often one step away from abusing ASP.NET ViewState deserialization. In Gladinet's case, attackers attempted a ViewState deserialization attack after obtaining the keys, then attempted to retrieve its output.
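For context, the sensitive material lives in ASP.NET's machineKey element. A generic example (placeholder values, not Gladinet's configuration):

```xml
<!-- Generic ASP.NET web.config fragment; values are placeholders. -->
<system.web>
  <machineKey
      validationKey="PLACEHOLDER_VALIDATION_KEY_HEX"
      decryptionKey="PLACEHOLDER_DECRYPTION_KEY_HEX"
      validation="HMACSHA256"
      decryption="AES" />
</system.web>
```

An attacker who reads these values can sign and encrypt ViewState payloads the server will trust, which is exactly what makes config theft a stepping stone to RCE.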

Even when that “final step” fails (as it reportedly did in at least one observed attempt), the compromise is still severe:

  • Sensitive configuration exposure
  • Credential material and secrets risk
  • Persistent unauthorized access if forged tickets continue to work
  • Potential lateral movement using the service identity context

Why hard-coded keys keep shipping (and why patching alone isn’t enough)

Answer first: Hard-coded keys persist because they’re convenient during development, hard to remove late in release cycles, and rarely tested the way auth bypasses are tested.

Hard-coded crypto often starts as a shortcut:

  • “We’ll rotate it later.”
  • “It’s only for internal tickets.”
  • “It’s behind the firewall.”

Those assumptions collapse in real environments where:

  • Apps end up publicly exposed (sometimes unintentionally).
  • Tickets and URLs are logged, forwarded, cached, or copied into chat tools.
  • Attackers chain issues together (as seen here, chaining with earlier Gladinet flaws).

Gladinet is also a good reminder that vulnerability management isn’t only about “critical CVEs.” This issue initially had no CVE, yet it was actively exploited and later assigned CVE-2025-14611 with a 7.1 severity score. A lot of teams would have triaged a 7.1 below “drop everything.” Attackers clearly disagreed.

Here’s what works better than severity-score worship: prioritize anything that enables auth bypass, secret extraction, token forgery, or RCE chaining—especially on internet-facing services.

Where AI helps: detecting hard-coded key abuse by behavior, not signatures

Answer first: AI-augmented threat detection shines here because exploitation leaves anomalies in requests, identities, and token lifetimes that are easier to spot with models than with static rules.

Teams often ask, “Can AI find hard-coded keys in code?” Sometimes, yes—via secure code scanning and pattern detection. But the bigger win is that AI can detect the operational consequences of bad cryptography even when you didn’t know the flaw existed.

1) AI for anomaly detection in token and ticket lifetimes

A timestamp set to 9999 is absurd on its face, but many logs don’t make it obvious, and rule sets rarely cover every product’s token format.

A practical approach:

  • Model typical ticket TTLs and timestamp distributions per app
  • Alert when token lifetimes exceed policy baselines (hours/days) by large margins
  • Correlate “never-expiring” indicators with unusual download activity (config files, binaries)
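The first two bullets can be sketched as a simple statistical baseline that catches an absurd lifetime without any product-specific rule. The TTL values and the z-score-style threshold below are assumptions to tune per app; in production the history would come from normalized token-lifetime fields in your logs.

```python
import statistics

# Toy baseline of observed ticket lifetimes (seconds) for one app.
# These values are illustrative, not real Gladinet telemetry.
baseline_ttls = [3600, 3500, 3620, 3580, 3605, 3590, 3610]

def is_ttl_anomalous(ttl_seconds: float, history: list[float], k: float = 6.0) -> bool:
    """Flag lifetimes far outside the app's modeled TTL distribution."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    return abs(ttl_seconds - mean) > k * stdev

# A "9999"-style never-expiring ticket translates to an absurd TTL:
print(is_ttl_anomalous(10**9, baseline_ttls))  # True
print(is_ttl_anomalous(3600, baseline_ttls))   # False
```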

This is the kind of detection that doesn’t require knowing the CVE ahead of time.

2) AI for identity-context drift (blank credentials → service identity)

When Username/Password fields are blank and the app “falls back” to an IIS Application Pool Identity, you get identity-context drift: actions are executed under an unexpected principal.

AI-based baselining can flag:

  • File access performed under service identities that typically don’t browse storage endpoints
  • New combinations of endpoint + identity + file path
  • Spikes in privileged reads (like configuration retrieval) coming from non-admin workflows
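A minimal sketch of that baselining, assuming you can extract an effective identity and endpoint per request. The identity and endpoint names below are illustrative placeholders; a real model would also decay old observations and score rarity rather than treat it as binary.

```python
from collections import defaultdict

class IdentityBaseline:
    """Track which (identity, endpoint) pairs are normal for this app."""

    def __init__(self):
        self.seen = defaultdict(set)

    def learn(self, identity: str, endpoint: str) -> None:
        self.seen[identity].add(endpoint)

    def is_drift(self, identity: str, endpoint: str) -> bool:
        """A never-before-seen combination is identity-context drift."""
        return endpoint not in self.seen[identity]

baseline = IdentityBaseline()
baseline.learn("DOMAIN\\svc_apppool", "/health")        # normal for the service
baseline.learn("alice", "/storage/filesvr.dn")          # normal for a user

# The app-pool identity suddenly reading a storage endpoint is drift:
print(baseline.is_drift("DOMAIN\\svc_apppool", "/storage/filesvr.dn"))  # True
```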

If you’ve ever tried to hand-write rules for every “service account doing weird things” scenario, you already know why this matters.

3) AI for URL pattern and parameter outlier detection

The exploit traffic included crafted requests to /storage/filesvr.dn with a particular encrypted string defenders were advised to hunt for (the encrypted representation of the web.config path).

Signature hunting is useful, but brittle. AI-assisted clustering helps you find:

  • Rare parameter formats and unusually long token blobs
  • Sudden appearance of high-entropy values in query strings
  • Repeated replay of identical crafted URLs (consistent with a “reusable ticket”)

That last point is key: a forged ticket that never expires often results in replay behavior that stands out statistically.
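A rough sketch of the entropy-based outlier check. The entropy threshold and minimum length are assumptions you'd tune against your own traffic; encrypted ticket blobs score high because their characters are close to uniformly distributed.

```python
import math
from urllib.parse import urlparse, parse_qsl

def shannon_entropy(s: str) -> float:
    """Bits per character of a string; encrypted blobs score high."""
    if not s:
        return 0.0
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def suspicious_params(url: str, entropy_threshold: float = 4.0, min_len: int = 40):
    """Return query parameter names whose values look like token blobs."""
    params = parse_qsl(urlparse(url).query)
    return [k for k, v in params
            if len(v) >= min_len and shannon_entropy(v) > entropy_threshold]

# Illustrative URL (not a real IOC): one high-entropy blob, one plain value.
blob = "AbCdEfGhIjKlMnOpQrStUvWxYz0123456789-_~."
url = f"https://example.test/storage/filesvr.dn?s={blob}&name=report.pdf"
print(suspicious_params(url))  # ['s']
```

Feed flagged parameters into clustering or replay analysis: the same high-entropy value appearing repeatedly across sessions is exactly the "reusable ticket" pattern.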

4) AI to reduce time-to-triage during patch windows

Mid-December is peak “change-freeze” season in many organizations. Attackers love it. AI-assisted SOC workflows can shorten response cycles by:

  • Auto-enriching suspicious IPs and correlating across WAF, IIS, EDR, and app logs
  • Summarizing the attack chain (“config read → key extraction → ViewState attempt”)
  • Prioritizing hosts likely to expose secrets (internet-facing, high request volume, older versions)

The goal isn’t to replace analysts. It’s to stop losing hours to manual correlation while an attacker is actively iterating.

A defender’s playbook for Gladinet-style crypto failures

Answer first: Treat this as two problems: (1) immediate containment and hardening, and (2) systemic prevention so hard-coded keys and token forgery don’t surprise you again.

Immediate actions (next 24–48 hours)

  1. Patch to the fixed version of CentreStack/Triofox (Gladinet released 16.12.10420.56791 on December 8, 2025).
  2. Hunt for exploit traces in web and application logs:
    • Requests to /storage/filesvr.dn
    • Repeated download attempts of sensitive paths (especially config files)
    • Suspicious high-entropy parameters and replayed URLs
  3. Check for the attacker infrastructure noted in reporting (for example, a specific IP was associated with observed activity).
  4. If compromise is suspected, rotate the ASP.NET machine key and restart IIS across nodes.
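The log hunt in step 2 can be sketched as a small script. The log format, field layout, and character-class heuristic below are assumptions; adapt the parsing to however your reverse proxy or IIS pipeline actually writes access logs.

```python
import re

# Target endpoint from the reported exploitation; the parameter heuristic
# (a long base64-ish blob in the query string) is an assumption to tune.
ENDPOINT = "/storage/filesvr.dn"
HIGH_ENTROPY_PARAM = re.compile(r"[?&]\w+=[A-Za-z0-9+/=_-]{60,}")

def hunt(log_lines):
    """Return access-log lines that hit the endpoint with a blob-like param."""
    return [line for line in log_lines
            if ENDPOINT in line and HIGH_ENTROPY_PARAM.search(line)]

# Illustrative log lines, not real traffic:
sample = [
    "2025-12-09 10:01:02 GET /storage/filesvr.dn?s=" + "Q" * 80 + " 200",
    "2025-12-09 10:01:03 GET /portal/login 200",
]
print(len(hunt(sample)))  # 1
```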

Practical stance: if you found evidence of config access, assume secrets are burned. Rotate keys, then investigate. Don’t investigate first and rotate later.

Hardening moves that pay off all year

Reduce blast radius of configuration exposure

  • Lock down read access to web.config and other sensitive configs at the filesystem and IIS levels.
  • Ensure the application pool identity has the minimum privileges required.
  • Centralize secrets in a vault where possible, instead of static config values.

Add “crypto misuse” checks into SDLC and procurement

If you build software:

  • Add SAST rules for hard-coded keys, IVs, and static salts.
  • Block builds when cryptographic material is embedded in binaries or config templates.
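A toy version of such a SAST gate is sketched below. The regex patterns are illustrative starting points only, not a complete ruleset; real scanners layer many more heuristics (entropy checks, known-key corpora, binary analysis).

```python
import re

# Illustrative patterns for embedded cryptographic material in source
# files and config templates -- a starting point, not a full ruleset.
PATTERNS = [
    re.compile(r'(?i)(aes|secret|enc(ryption)?)[_ ]?key\s*[:=]\s*["\'][^"\']{8,}["\']'),
    re.compile(r'(?i)\bmachineKey\b.*(validationKey|decryptionKey)\s*='),
    re.compile(r'(?i)\biv\s*[:=]\s*["\'][0-9a-f]{16,}["\']'),
]

def scan(text: str):
    """Return (line_number, line) findings; any finding should fail the build."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(pat.search(line) for pat in PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

# Example: a config template with an embedded key should fail the gate.
print(bool(scan('aes_key = "0123456789abcdef"')))  # True
```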

If you buy software:

  • Ask vendors directly how keys are generated, stored, and rotated.
  • Require evidence of secure key management and rotation mechanisms.

Hard-coded keys are not a “bug.” They’re a product decision. Treat them that way.

Instrument the app so AI can see what matters

AI detection is only as good as your signals. For internet-facing enterprise apps, prioritize:

  • Structured logs for auth decisions (who, what identity, why allowed)
  • Request IDs that flow across reverse proxy/WAF → app → storage actions
  • Normalized fields for token lifetime, creation time, and expiry

If the only thing you log is a 200 status code, you’re blind.
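Those three signals can be captured with a structured log record like the sketch below. The schema and field names are assumptions, not a standard; align them with whatever your log pipeline and detection models already consume.

```python
import json
import logging
import time

log = logging.getLogger("auth")

def log_auth_decision(request_id, identity, endpoint, allowed,
                      token_created, token_expires, reason):
    """Emit one structured auth decision with normalized token lifetimes."""
    record = {
        "ts": time.time(),
        "request_id": request_id,       # flows proxy/WAF -> app -> storage
        "identity": identity,           # effective principal, incl. fallbacks
        "endpoint": endpoint,
        "allowed": allowed,
        "token_created": token_created,
        "token_expires": token_expires,
        "token_ttl": token_expires - token_created,  # normalized lifetime
        "reason": reason,               # why the decision was allowed/denied
    }
    log.info(json.dumps(record))
    return record

# Hypothetical values for illustration:
rec = log_auth_decision("req-123", "apppool\\svc", "/storage/filesvr.dn",
                        True, 1765000000, 1765003600, "valid_ticket")
print(rec["token_ttl"])  # 3600
```

With `token_ttl` as a first-class field, the never-expiring-ticket detection earlier in this post becomes a one-line query instead of a parsing project.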

“People also ask” (and the answers you actually need)

Is a hard-coded key always exploitable?

Answer: Not always, but it’s always a serious design flaw. If the key protects anything an attacker can influence (tickets, cookies, tokens, encrypted URLs), exploitation is often straightforward.

Why did attackers target web.config specifically?

Answer: Because web.config can contain or expose the ASP.NET machine key, which is central to ViewState integrity and can enable deserialization-based RCE.

Can AI really detect this faster than rules?

Answer: Yes, when the exploit produces statistical outliers—like never-expiring tickets, identity fallback behavior, unusual endpoint access patterns, and replayed crafted URLs. Rules can catch known strings. AI can catch the pattern even when the string changes.

Where this fits in the AI in Cybersecurity story

Hard-coded keys are a reminder that “AI security” isn’t only about fighting AI-powered attackers. It’s about using AI to catch the kinds of failures that slip past process: insecure cryptography, brittle auth flows, and token formats that are easy to forge.

If you’re responsible for a SOC, this is the week to pressure-test your detection: could you spot a never-expiring authorization token being replayed against a storage endpoint? If not, you’re relying on luck and patch speed.

The reality? It’s simpler than it sounds: patch fast, rotate secrets when exposure is plausible, and use AI-driven anomaly detection to catch the next ‘static key’ incident before it becomes your incident. What signal would you want your team to see first—the CVE announcement, or the first forged ticket hitting your logs?