Serialization RCEs keep repeating. Learn how AI in cybersecurity helps spot hidden deserialization risk in Next.js/React and respond faster to exploits.

Stop Serialization RCEs: AI Can Break the 10-Year Cycle
A decade apart, two different ecosystems tripped over the same rake.
In 2015, Java shops learned the hard way that unsafe deserialization can turn “data” into remote code execution. In late 2025, public exploit scripts appeared within days for high-severity React/Next.js issues tied to React Server Components (CVE-2025-55182 and CVE-2025-66478). Different language. Different frameworks. Same failure mode: deserializing untrusted input across a trust boundary.
This post is part of our AI in Cybersecurity series, and I’m going to take a stance: we won’t break the serialization RCE cycle with better blog posts or another annual secure coding training. We break it by building security “reflexes” into the places developers actually work—framework defaults, CI pipelines, runtime defenses, and yes, AI assistants that can spot risky patterns before they ship.
Why serialization bugs keep coming back
Serialization vulnerabilities persist because they’re the shortest path between “it works” and “it’s in production.” Passing complex objects between client and server, or service to service, is a daily need. Serialization makes it feel effortless.
The trap is that many serialization mechanisms do more than move data. They reconstruct objects—sometimes with behavior—and that’s where attackers win. If an attacker can influence the serialized payload (directly or indirectly), they can often steer the deserializer into executing code, invoking gadget chains, or corrupting prototypes and object graphs.
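To make that concrete, here is a deliberately naive sketch (every class and registry name is hypothetical, not from any real framework) of how a deserializer that reconstructs typed objects lets the payload pick the behavior:

```ts
import { execSync } from "node:child_process";

class ReportJob {
  constructor(public params: unknown) {}
  run() { /* render a report */ }
}

class MaintenanceJob {
  constructor(public params: { cmd: string }) {}
  // Behavior an attacker can reach purely through deserialization:
  run() { execSync(this.params.cmd); }
}

// "Convenient" registry-based revival: the payload names the type.
const registry: Record<string, new (p: any) => { run(): void }> = {
  ReportJob,
  MaintenanceJob,
};

function revive(payload: string) {
  const { type, params } = JSON.parse(payload);
  return new registry[type](params); // attacker steers which constructor runs
}

// A payload like '{"type":"MaintenanceJob","params":{"cmd":"id"}}' turns
// "just data" into command execution as soon as anything calls .run().
```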
Convenience beats caution (especially under deadline)
Most orgs still reward shipping features more than preventing classes of bugs. When a framework offers a “magic” abstraction, developers accept it—because that’s rational behavior in a busy sprint.
The Next.js/React Server Components angle highlights something subtle: you may think you’re “just calling a function,” but under the hood you’re exercising a custom serialization protocol. If you don’t know where serialization is happening, you won’t threat model it.
Framework abstraction hides the blast radius
Abstractions are good—until they hide security-sensitive mechanics:
- Developers see a helper or Server Action.
- The framework sees a structured payload crossing a boundary.
- Attackers see a parser and a deserializer they can shape.
When the dangerous part is invisible, you don’t get the normal “this feels sketchy” developer instinct. That’s why these issues survive language migrations: the risk doesn’t transfer with the abstraction.
The ecosystem doesn’t learn collectively
Java teams built institutional memory about unsafe deserialization (gadget chains, ObjectInputStream, and the ugly ways “data” becomes “execution”). But that knowledge didn’t automatically migrate to the Node.js/React world a decade later.
Security knowledge is still too siloed:
- One community gets burned.
- They patch, write guidance, update lint rules.
- Another community repeats the pattern in a different stack.
What changed in 2025: time-to-exploitation collapsed
Exploit availability is now a timing problem, not a rarity problem. In 2015, exploit chatter might surface on niche forums weeks before broad weaponization. In 2025, it’s common to see multiple public repositories with functional exploit code shortly after disclosure.
Here’s the real operational consequence:
Time-to-exploitation is approaching the time it takes your team to read the advisory and schedule the patch.
Now add agentic automation (attack workflows that can adapt, test targets, and iterate without waiting for a human). The window compresses again.
For defenders, this changes the order of operations:
- Know whether you’re exposed (asset intelligence and app fingerprinting).
- Apply compensating controls immediately (WAF rules, feature flags, traffic shaping).
- Patch fast, but don’t pretend patching is “the first move.”
What defenders should do for Next.js/React deserialization RCE
Treat RCE in a web framework as “assume compromise” until proven otherwise. The reason is simple: once an attacker executes code, they can pivot faster than most teams can investigate.
1) Get precise about your attack surface
Attackers can differentiate Next.js App Router targets from Pages Router sites by checking for specific client-side markers. You should already be able to answer—per app, per environment:
- Are we on App Router or Pages Router?
- Are Server Actions enabled?
- Which endpoints receive Server Action requests?
- Which versions of React Server DOM packages are deployed?
If your inventory can’t answer that in minutes, you’re not doing “asset management”—you’re doing archaeology.
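If you need a starting point, the rendered HTML usually gives the router away. Here's a minimal fingerprint sketch; the markers below are commonly observed but can shift between Next.js versions, so verify them against your own apps:

```ts
// Minimal router fingerprint. Markers are heuristics, not guarantees.
async function fingerprintNextApp(url: string) {
  const res = await fetch(url);
  const html = await res.text();
  return {
    url,
    // App Router pages ship RSC/Flight bootstrap data:
    appRouter: html.includes("self.__next_f"),
    // Pages Router pages ship a __NEXT_DATA__ JSON blob:
    pagesRouter: html.includes('id="__NEXT_DATA__"'),
  };
}

// Sweep your own inventory, not the internet.
fingerprintNextApp("https://app.internal.example").then(console.log);
```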
2) Reduce exposure fast: disable what you don’t use
If you’re not using Server Actions, disable them. Security teams love mitigations that are reversible and low-impact, and feature flags are often your best friend during the first 24–72 hours after disclosure.
If you are using Server Actions, focus controls on the entry points that matter:
- Gate and monitor the endpoints targeted via `Next-Action` headers (sketched below).
- Apply strict request constraints (size limits, content-type validation, multipart rules).
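Here's what that gating can look like as Next.js middleware. The allowed routes and size limit are placeholders you'd replace with values from your own baseline:

```ts
// middleware.ts — a sketch of a compensating control, not a patch substitute.
import { NextRequest, NextResponse } from "next/server";

// Routes where Server Actions are legitimately used (assumption: yours differ).
const ALLOWED_ACTION_ROUTES = new Set(["/checkout", "/settings"]);
const MAX_BODY_BYTES = 100_000;

export function middleware(req: NextRequest) {
  const isActionRequest =
    req.method === "POST" && req.headers.has("next-action");

  if (isActionRequest) {
    // Block Server Action traffic everywhere you don't expect it.
    if (!ALLOWED_ACTION_ROUTES.has(req.nextUrl.pathname)) {
      return new NextResponse(null, { status: 404 });
    }
    // Cheap size gate. Content-Length can lie, so enforce real limits
    // server-side too (e.g. serverActions.bodySizeLimit in next.config).
    const len = Number(req.headers.get("content-length") ?? 0);
    if (len > MAX_BODY_BYTES) {
      return new NextResponse(null, { status: 413 });
    }
  }
  return NextResponse.next();
}
```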
3) Hunt for exploit-shaped traffic
You’re looking for requests that don’t match your application’s normal behavioral profile. Practical signals worth hunting in logs and your SIEM:
- Anomalous POST requests that include a `Next-Action` header
- Multipart payload patterns that target `__proto__` or other prototype-manipulation structures
- Unusual serialized JSON structures inconsistent with your normal RSC traffic
- Error responses followed by suspicious client behavior (exfil sometimes appears encoded, including base64 patterns in error digests)
A blunt but effective approach I’ve used: build a “known-good” baseline for Server Action traffic (routes, sizes, content types, typical parameters). Then alert on anything that deviates materially.
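Here's a minimal sketch of that baseline check over structured access logs. The log field names are assumptions, so map them to whatever your stack actually emits:

```ts
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Assumed JSON-lines access log shape — adapt field names to your logger.
type AccessLog = {
  method: string;
  path: string;
  headers: Record<string, string>;
  bodyBytes: number;
};

const BASELINE = {
  actionRoutes: new Set(["/checkout", "/settings"]), // known-good routes
  maxBodyBytes: 50_000,                              // typical upper bound
};

async function huntServerActionAnomalies(logFile: string) {
  const rl = createInterface({ input: createReadStream(logFile) });
  for await (const line of rl) {
    const entry: AccessLog = JSON.parse(line);
    const isAction =
      entry.method === "POST" && "next-action" in entry.headers;
    if (!isAction) continue;
    if (!BASELINE.actionRoutes.has(entry.path)) {
      console.warn("Server Action on unexpected route:", entry.path);
    }
    if (entry.bodyBytes > BASELINE.maxBodyBytes) {
      console.warn("Oversized Server Action payload:", entry.path);
    }
  }
}

huntServerActionAnomalies("./access.log.jsonl");
```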
4) Assume post-exploitation actions are already underway
If an attacker gets RCE, the first moves are usually boring—and devastating:
- Credential harvesting from environment variables
- Cloud lateral movement via metadata endpoints
- Persistence via cron jobs, scheduled tasks, startup scripts, or CI secrets
Your incident response playbook should treat this as a full compromise scenario:
- Rotate secrets (not just passwords—API keys, OAuth tokens, signing keys)
- Re-issue session cookies and invalidate tokens
- Review outbound traffic for new destinations
- Audit CI/CD pipelines for tampering
Where AI helps: preventing “invisible” serialization risk
AI’s real value here isn’t magical detection. It’s reducing how often humans miss the obvious. Serialization vulnerabilities thrive in the gap between framework complexity and human attention.
AI can spot risky patterns during development
If your team uses AI coding assistants, you need them to do more than autocomplete.
A useful AI assistant in this context:
- Flags when code crosses a trust boundary (client input → server action → parser)
- Identifies potentially dangerous deserialization usage, even when wrapped by a framework helper
- Suggests safer “data-only” formats and explicit object construction
- Generates validation schemas alongside DTOs (rather than leaving validation as an afterthought)
One practical tactic: add a required prompt template for AI-assisted code changes that touch request parsing or serialization:
- “List trust boundaries this code crosses.”
- “What input validation happens before parsing?”
- “Does this reconstruct objects with behavior?”
- “What would exploitation look like in logs?”
This feels small, but it forces the assistant to surface security context instead of only optimizing for speed.
AI can improve detection with behavior-based signals
At runtime, defenders often struggle because payloads evolve quickly. Static signatures get stale.
AI-guided detection can help by learning “normal” for your app and alerting on deviations:
- atypical request shapes for Server Actions
- unusual header combinations
- payload entropy spikes (compressed/encoded blobs)
- sudden increases in 4xx/5xx around specific endpoints
That’s especially valuable when the framework protocol is specialized (like the Flight/RSC ecosystem), because generic WAF rules often miss protocol-specific weirdness.
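Of those signals, payload entropy is the cheapest to compute yourself. A minimal sketch (the threshold is an assumption to calibrate against your own traffic):

```ts
// Shannon entropy in bits per byte: plain JSON usually lands well below
// ~6, while compressed or encoded blobs push toward 8.
function shannonEntropy(buf: Uint8Array): number {
  const counts = new Array(256).fill(0);
  for (const b of buf) counts[b]++;
  let entropy = 0;
  for (const c of counts) {
    if (c === 0) continue;
    const p = c / buf.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// Flag large, high-entropy bodies; tune both numbers to your baseline.
function looksLikeEncodedBlob(body: Uint8Array, threshold = 6.5): boolean {
  return body.length > 512 && shannonEntropy(body) > threshold;
}
```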
AI can turn institutional security knowledge into defaults
The bigger win is upstream: make the safe path the easy path. This is where AI in cybersecurity can influence engineering systems, not just analysts:
- Secure-by-default framework settings (and loud warnings when you deviate)
- Generated schema validation stubs for request handlers
- CI checks that fail builds when unsafe deserialization patterns appear (a minimal sketch follows this list)
- PR reviews where an AI agent highlights trust-boundary violations with concrete, code-level explanations
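As a starting point for the CI-check idea, here is a deliberately simple sketch. The pattern list is an assumption to extend for your codebase, and it complements rather than replaces real SAST:

```ts
// ci-deserialization-check.ts — a grep-grade CI gate, not a SAST replacement.
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

const UNSAFE_PATTERNS: Array<[RegExp, string]> = [
  [/\bnode-serialize\b/, "node-serialize can execute code on unserialize()"],
  [/\beval\s*\(/, "eval() on request-derived data"],
  [/new\s+Function\s*\(/, "Function constructor builds code from strings"],
];

// List tracked source files (assumes the check runs inside a git repo).
const files = execSync("git ls-files '*.ts' '*.tsx' '*.js'")
  .toString()
  .split("\n")
  .filter(Boolean);

let failed = false;
for (const file of files) {
  const src = readFileSync(file, "utf8");
  for (const [pattern, reason] of UNSAFE_PATTERNS) {
    if (pattern.test(src)) {
      console.error(`${file}: ${reason}`);
      failed = true;
    }
  }
}
process.exit(failed ? 1 : 0);
```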
If you want fewer serialization bugs, stop relying on everyone remembering the same lesson forever.
A practical playbook: “dumb data + explicit objects”
If your serialization format can reconstruct arbitrary types, you’re taking on unnecessary risk. The safer pattern is simple and repeatable:
Prefer data-only formats
Pick formats that parse into primitives and plain structures:
- JSON
- Protocol Buffers
- MessagePack
- CBOR
- FlatBuffers
These formats aren’t “automatically secure,” but they remove the most dangerous feature: object rehydration with behavior.
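They also don't make everything downstream safe. A sketch of how a naive deep merge reintroduces exactly the `__proto__` problem hunted for earlier (fine to read, not something to run in a long-lived process):

```ts
const parsed = JSON.parse('{"__proto__": {"isAdmin": true}}');
// JSON.parse defines "__proto__" as an ordinary own property,
// so Object.prototype is untouched at this point:
console.log(({} as any).isAdmin); // undefined

// But a naive recursive merge walks into that key and assigns
// through the prototype chain, polluting every object:
function naiveMerge(target: any, source: any) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = naiveMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

naiveMerge({}, parsed);
console.log(({} as any).isAdmin); // true — prototype polluted
```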
Validate with schemas before business logic
Schema validation is the line that stops “weird but parseable” input from becoming “weird and exploitable” behavior.
Common approaches:
- JSON Schema
- Zod/Yup in JavaScript
- Pydantic/marshmallow in Python
A rule I like: no handler touches application logic until validation passes. Not “validate later.” Not “sanitize a field or two.” Validate first.
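In Zod, that rule looks like this. The schema fields and handler are illustrative, not a prescribed API:

```ts
import { z } from "zod";

const TransferRequest = z.object({
  accountId: z.string().uuid(),
  amountCents: z.number().int().positive().max(1_000_000),
  memo: z.string().max(200).optional(),
});

async function executeTransfer(req: z.infer<typeof TransferRequest>) {
  // Domain logic lives here, strictly behind the validation gate.
  return { status: 200 };
}

export async function handleTransfer(rawBody: string) {
  // 1. Parse into plain data: no behavior, no class rehydration.
  const parsed: unknown = JSON.parse(rawBody);
  // 2. Validate before any business logic runs.
  const result = TransferRequest.safeParse(parsed);
  if (!result.success) {
    return { status: 400, issues: result.error.issues };
  }
  // 3. Only validated, typed data reaches application code.
  return executeTransfer(result.data);
}
```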
Construct objects explicitly
Even after parsing and validation, treat object creation as a deliberate step:
- Parse into a plain data structure.
- Validate shape and constraints.
- Construct domain objects explicitly.
That explicit construction is where you control behavior. Attackers hate that.
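A small sketch of that last step (names illustrative, continuing the validation example above): the constructor is private, so parsed data can never become a live object without passing through the explicit factory.

```ts
// No parser, merge, or deserializer can conjure a live Transfer from raw
// input; code constructs it, on purpose, from already-validated data.
class Transfer {
  private constructor(
    readonly accountId: string,
    readonly amountCents: number,
  ) {}

  // The single, deliberate path from validated plain data to behavior:
  static fromValidated(data: { accountId: string; amountCents: number }) {
    return new Transfer(data.accountId, data.amountCents);
  }

  feeCents(): number {
    return Math.ceil(this.amountCents * 0.01);
  }
}

// Usage: const transfer = Transfer.fromValidated(result.data);
```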
“Everyone is a coder now” — and security has to keep up
AI-assisted coding is speeding up software delivery across every department. That’s not a threat by itself; it’s just reality in December 2025.
The risk is that LLMs happily output whatever pattern looks common, including unsafe serialization examples that appear all over old tutorials and repositories. If someone asks for “the fastest way to store and reload objects,” they can get an insecure answer unless they ask the question with security context.
Here’s what works in practice:
- Treat AI-generated code like untrusted input until reviewed
- Make “secure alternative” prompts the default in engineering workflows
- Train reviewers to look specifically for trust boundaries, parsing, and deserialization
- Use automated checks (SAST, dependency scanning, and targeted lint rules) as a backstop
The most valuable engineers won’t be the ones who type fastest. They’ll be the ones who can use AI for scale and reliably catch the risky parts the model glosses over.
What to do next (if you want fewer 2 a.m. patch drills)
Serialization RCEs aren’t going away on their own. The bug survives because it hides inside convenience.
If you own security for a Next.js/React-heavy environment, take three steps this week:
- Inventory where App Router, Server Actions, and RSC-related packages exist (prod and non-prod).
- Instrument and alert on Server Action traffic patterns, especially `Next-Action` header usage and abnormal POST shapes.
- Add AI-assisted guardrails: schema validation templates, CI checks for deserialization anti-patterns, and PR review prompts that force trust-boundary analysis.
AI in cybersecurity is at its best when it shortens the gap between “we shipped it” and “we realized it was dangerous.” The question worth asking your team now is simple: what security decisions are still trapped in human memory instead of encoded into your tooling?