Special operators need expanded EW ranges to train AI-enabled drones under real jamming. Here’s what AI-ready testing should look like in 2026.

Bigger EW Ranges: The Missing Piece for AI Drone Readiness
Ten. That’s how many public advisories the FAA issued for GPS interruptions in 2025—an indirect measure of how rarely the U.S. can practice operating in a truly GPS-denied environment at scale.
Meanwhile, real battlefields have moved on. In Ukraine, electronic warfare (EW) isn’t a special scenario—it’s the default. Drones lose GPS. Links get jammed. Operators adapt with fiber-tethered control, alternate navigation, and more onboard autonomy. If you’re responsible for readiness, acquisition, or training pipelines, the takeaway is blunt: the U.S. can’t build AI-enabled drone and EW capability on “mostly permissive” ranges and expect it to work in contested spectrum.
This post is part of our AI in Defense & National Security series, and it focuses on a practical bottleneck that doesn’t get enough attention in AI conversations: test and training infrastructure. Special operations trainers are now pushing regulators to expand where the military can jam cellular and GPS signals. That push isn’t just about “more space.” It’s about creating the conditions where AI-enabled autonomy, mission planning, and resilient communications can be trained, evaluated, and trusted.
Expanded EW ranges are infrastructure for AI, not a nice-to-have
Answer first: If you want AI-enabled drones and electronic warfare to perform under jamming, you need ranges that allow realistic jamming. Without that, AI models get trained and validated on the wrong data.
Range constraints aren’t a minor scheduling inconvenience—they shape what systems become fieldable. When training sites can’t legally or safely simulate powerful GPS and cellular denial, programs drift toward brittle assumptions: reliable timing, clean GNSS, stable command links, predictable RF environments. That’s exactly the opposite of what adversaries will offer.
Special operations leaders are now signaling that they’re ready for “uncomfortable discussions” with civil regulators to carve out airspace and spectrum permissions for modern training. The operational logic is straightforward:
- EW changes the drone problem from “fly here” to “figure it out.” Under heavy jamming, autonomy isn’t a buzzword; it’s the only way the mission continues.
- AI systems learn what you let them see. If the U.S. can’t generate representative contested-spectrum data at scale, model performance claims become optimistic at best.
- Readiness is a systems problem. Hardware, software, spectrum access, authorities, safety cases, and data pipelines all have to line up.
The U.S. has historically treated GPS as something it controls. Commercial dependence on GPS, and the legal protections around civilian spectrum, created approval processes that made sense for peacetime. But modern conflict doesn't ask permission.
What Ukraine proved: autonomy and EW are now tied at the hip
Answer first: Ukraine demonstrated that EW pressure is accelerating autonomy, and autonomy is reshaping EW.
On a contested front, drones fail in predictable ways: navigation drift, link loss, spoofing, and latency spikes. Operators respond with engineering workarounds, some elegant and some ugly.
Fiber-tethered drones are a symptom of a bigger shift
Fiber-controlled drones (often discussed in open reporting) are an adaptation to heavy jamming. They reduce reliance on RF links, but they add constraints: tether management, range limits, physical vulnerability, and logistics complexity.
The deeper point is what the tether represents: a battle between control links and denial capabilities. When denial is strong enough, designers trade freedom for reliability.
Onboard target recognition changes the EW equation
Open reporting also describes drones using high-performance chips and onboard processing to recognize targets by shape and size, making them less dependent on continuous comms or pristine navigation. You don't need to debate the exact performance of any one system to see the trend line: the more sensing and decision-making you push onboard, the less useful jamming becomes as a single-point solution.
That’s where AI matters most in electronic warfare:
- Perception: identifying vehicles, emitters, decoys, and terrain features with partial data
- Inference under uncertainty: selecting actions when GPS is degraded and links are intermittent
- Adaptation: updating behavior based on new jamming patterns, not last month’s lab conditions
But those capabilities aren’t “trained into existence” by slide decks. They require exposure to messy, contested environments—repeatedly.
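To make "inference under uncertainty" concrete, here's a deliberately minimal sketch of a degrade-gracefully decision policy. Everything in it is illustrative: the thresholds, field names, and actions are invented for this post, and a real autonomy stack would sit on top of far richer state estimation. The shape of the logic is what matters: behavior should degrade in stages as confidence, navigation, and links erode.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()        # proceed with onboard autonomy
    LOITER = auto()          # hold position, wait for link or better data
    RETURN_TO_BASE = auto()  # abort under unacceptable uncertainty

@dataclass
class WorldEstimate:
    target_confidence: float  # onboard classifier confidence, 0..1
    gnss_valid: bool          # GNSS solution passes integrity checks
    link_quality: float       # recent command-link packet success, 0..1
    nav_drift_m: float        # estimated dead-reckoning drift, meters

def decide(est: WorldEstimate) -> Action:
    """Toy decision policy: degrade gracefully instead of failing hard."""
    # Hard abort if dead-reckoning drift exceeds a safety bound.
    if not est.gnss_valid and est.nav_drift_m > 150.0:
        return Action.RETURN_TO_BASE
    # With a healthy link, a human can adjudicate low-confidence targets.
    if est.link_quality > 0.5:
        return Action.CONTINUE
    # Link-denied: only act on high-confidence onboard classifications.
    if est.target_confidence >= 0.9:
        return Action.CONTINUE
    return Action.LOITER
```

A permissive range only ever exercises the top branches of a policy like this. Contested-spectrum training is what forces the bottom branches to run, fail, and improve.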
Why the U.S. training pipeline is hitting a wall
Answer first: The U.S. can train EW concepts almost anywhere, but it can only train realistic effects in a handful of places—and that’s the wall.
According to a Joint Staff manual cited in open reporting, the U.S. regularly conducts cellular and GPS jamming exercises or experiments at two primary sites: White Sands Missile Range and the Nevada Test and Training Range. Occasional permissions exist elsewhere, but they require FAA approvals and public advisories.
Two sites can’t cover the growing demand for:
- Special operations training pipelines
- Service-wide EW modernization
- Drone development and rapid modification cycles
- AI model training/validation at operational tempo
The real problem isn’t “jammers”—it’s authorities and process
Most people assume the bottleneck is equipment. Often it’s not. The bottleneck is the right to create effects in the RF environment without unacceptable risk to aviation, emergency services, and civilian infrastructure.
That drives a predictable anti-pattern:
- Developers test autonomy in clean conditions.
- They validate against limited jamming scenarios.
- They field systems that look great in demos.
- Operators discover edge cases under real jamming.
- Fixes come late, and confidence erodes.
The better path is to treat contested-spectrum training as a prerequisite for credible autonomy, not a late-stage check.
“Build it in the field” is now part of the doctrine—and it needs space
Special operations training leaders have also highlighted the need to create and modify drones during training. That’s a big deal. It implies a shift toward rapid iteration: changing payloads, control software, autonomy behaviors, and EW countermeasures during exercises—not after a long engineering cycle.
That workflow requires:
- secure compute and data handling on range
- repeatable test conditions plus variability injection
- instrumentation (telemetry, RF recording, ground truth)
- a permissions framework that doesn’t take months per event
If the training environment can’t legally reproduce GPS denial, you can’t stress the parts of autonomy that matter.
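One concrete way to get "repeatable test conditions plus variability injection" is a seeded scenario generator: the same seed reproduces a run exactly, so a failure can be replayed after a software change, while a new seed produces a fresh variant. The sketch below is a minimal Python illustration; the band labels and parameter ranges are invented, not real range values.

```python
import random
from dataclasses import dataclass

@dataclass
class JammingEvent:
    start_s: float     # seconds after scenario start
    duration_s: float
    band: str          # illustrative labels, e.g. "GNSS_L1"
    power_dbm: float

def build_scenario(seed: int, mission_length_s: float = 1800.0) -> list[JammingEvent]:
    """Generate a jamming timeline that is random but exactly reproducible."""
    rng = random.Random(seed)  # same seed -> identical event sequence
    events: list[JammingEvent] = []
    t = 0.0
    while t < mission_length_s:
        t += rng.uniform(60.0, 300.0)  # quiet gap between events
        event = JammingEvent(
            start_s=t,
            duration_s=rng.uniform(20.0, 120.0),
            band=rng.choice(["GNSS_L1", "GNSS_L2", "CELL_LTE", "CTRL_LINK"]),
            power_dbm=rng.uniform(-20.0, 30.0),
        )
        events.append(event)
        t += event.duration_s
    return events

# build_scenario(42) replays one exact run; build_scenario(43) injects variability.
```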
What “AI-enabled EW training” should look like in 2026
Answer first: The goal isn’t more explosions or louder jammers—it’s a data-rich, instrumented, repeatable contested-spectrum environment where AI systems can be evaluated like flight safety software.
Here’s what I’d look for if I were advising a program office or a training command building an AI-ready range plan.
1) Instrument the spectrum like a test asset
EW ranges should treat the RF environment as something you measure, record, and replay.
- Record wideband IQ (where permitted) and metadata about jamming events
- Create a library of “jamming profiles” tied to training objectives
- Capture ground truth: what actually happened vs. what the system believed happened
Why it matters for AI: you can’t improve what you can’t label. AI performance in EW is often limited by weak ground truth and sparse event data.
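As a sketch of what a labeled event record might look like, here's a minimal Python schema that pairs range-side ground truth with the system's own beliefs. Every field name and waveform label here is illustrative; the essential design choice is the pairing of truth and belief, which is what makes the data labelable and trainable.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RangeEventRecord:
    """One labeled contested-spectrum event: what the range actually did
    versus what the system under test believed. Field names illustrative."""
    event_id: str
    t_start_utc: str               # ISO 8601 timestamp
    t_end_utc: str
    jammer_waveform: str           # e.g. "barrage", "swept_cw", "spoof_l1ca"
    center_freq_hz: float
    eirp_dbw: float
    iq_capture_uri: str | None     # pointer into the wideband IQ archive
    truth_position_m: tuple[float, float, float]    # surveyed ground truth
    reported_position_m: tuple[float, float, float] # system's own belief
    system_declared_jamming: bool  # did the autopilot flag the event?

    def to_jsonl(self) -> str:
        # One record per line keeps the library easy to stream and filter.
        return json.dumps(asdict(self))
```

With records like these, "the system missed most swept-CW events" becomes a queryable claim instead of an anecdote.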
2) Move from scripted scenarios to adversary-style red teaming
Modern EW isn’t a single jammer on/off switch. It’s dynamic, adaptive, and sometimes subtle.
AI-enabled training should include:
- changing jammer power levels and waveforms mid-mission
- spoofing and meaconing patterns that evolve
- intermittent link degradation that forces autonomy to manage uncertainty
- decoys and emission control (EMCON) constraints
This is where AI can shine—if it’s been trained and tested against realistic variability.
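A red team doesn't have to be exotic to be adaptive. The sketch below shows the core loop in miniature: escalate when the system under test recovers, hold pressure when it hasn't, so testers can measure time-to-recover. The waveform names and tuning constants are invented for illustration.

```python
import random

# Illustrative waveform labels, not a real jammer catalog.
WAVEFORMS = ["barrage_noise", "swept_cw", "pulsed", "spoof_like"]

def red_team_step(rng: random.Random, drone_recovered: bool,
                  power_dbm: float, waveform: str) -> tuple[float, str]:
    """One adaptation step for a red-team jammer controller."""
    if drone_recovered:
        # The last technique was defeated: raise power and maybe switch waveform.
        power_dbm = min(power_dbm + rng.uniform(1.0, 5.0), 40.0)
        if rng.random() < 0.5:
            waveform = rng.choice([w for w in WAVEFORMS if w != waveform])
    # If the drone is still degraded, hold steady so recovery time is measurable.
    return power_dbm, waveform

# Example: adapt across four engagements based on observed recovery.
rng = random.Random(2026)
power, wf = -10.0, "barrage_noise"
for recovered in [True, True, False, True]:
    power, wf = red_team_step(rng, recovered, power, wf)
```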
3) Evaluate autonomy with operationally meaningful metrics
Don’t grade autonomy like a science fair project. Grade it like mission assurance.
Useful metrics include:
- Mission completion rate under defined EW pressure (percentage)
- Time-to-recover from GNSS loss (seconds)
- Navigation error growth rate during denial (meters/minute)
- Operator workload during degraded comms (task load index)
- Fratricide/near-miss risk controls when classification confidence drops
These metrics create a shared language between operators, testers, and acquisition leaders.
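Two of those metrics fall straight out of instrumented telemetry, which is the payoff of treating the range as a test asset. Here's a minimal Python sketch, assuming a telemetry stream that carries a GNSS validity flag and position error against surveyed range truth (the field names are invented):

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    t_s: float          # mission time, seconds
    gnss_valid: bool    # GNSS solution passed integrity checks
    nav_error_m: float  # position error vs. surveyed range truth

def gnss_recovery_times_s(samples: list[TelemetrySample]) -> list[float]:
    """Duration of each contiguous GNSS outage (time-to-recover, seconds)."""
    outages, start = [], None
    for s in samples:
        if not s.gnss_valid and start is None:
            start = s.t_s                    # outage begins
        elif s.gnss_valid and start is not None:
            outages.append(s.t_s - start)    # outage ends: record duration
            start = None
    return outages

def drift_rate_m_per_min(samples: list[TelemetrySample]) -> float:
    """Navigation error growth rate during denial (meters/minute)."""
    denied = [s for s in samples if not s.gnss_valid]
    if len(denied) < 2:
        return 0.0
    dt_min = (denied[-1].t_s - denied[0].t_s) / 60.0
    return (denied[-1].nav_error_m - denied[0].nav_error_m) / dt_min if dt_min else 0.0
```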
4) Build safety cases that regulators can say “yes” to
The fastest way to stall expanded range access is to treat regulators as obstacles. They’re risk owners.
A credible approach includes:
- clear geographic and temporal boundaries for jamming
- aviation deconfliction plans and notification procedures
- fail-safe mechanisms and monitoring to prevent unintended spillover
- post-event reporting that proves compliance and supports future approvals
If you want more approvals, you need fewer surprises.
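"Fewer surprises" is partly an engineering deliverable: the approved envelope can be encoded as a machine-enforced interlock rather than a paragraph in a safety plan. A minimal sketch, with placeholder bounds standing in for whatever a real approval specifies:

```python
from datetime import datetime, timezone

# All bounds below are illustrative placeholders, not real range values.
APPROVAL = {
    "lat_bounds": (33.0, 33.5),        # approved jamming box (degrees)
    "lon_bounds": (-106.7, -106.2),
    "window_utc": ("2026-03-01T14:00:00", "2026-03-01T18:00:00"),
    "max_eirp_dbw": 20.0,
}

def emission_permitted(lat: float, lon: float, eirp_dbw: float,
                       now: datetime) -> bool:
    """Interlock check: refuse to radiate outside the approved envelope."""
    lat_ok = APPROVAL["lat_bounds"][0] <= lat <= APPROVAL["lat_bounds"][1]
    lon_ok = APPROVAL["lon_bounds"][0] <= lon <= APPROVAL["lon_bounds"][1]
    t0, t1 = (datetime.fromisoformat(t).replace(tzinfo=timezone.utc)
              for t in APPROVAL["window_utc"])
    return (lat_ok and lon_ok and t0 <= now <= t1
            and eirp_dbw <= APPROVAL["max_eirp_dbw"])
```

An interlock like this doesn't replace deconfliction procedures, but it gives regulators an auditable artifact: every emission was checked against the approved box, window, and power cap.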
Who can actually help solve this?
Answer first: Expanded EW ranges are a multi-stakeholder build—training commands can’t do it alone.
This is one of those defense problems that looks “military,” but it’s operationally entangled with civilian systems. Organizations that tend to be most useful include:
- Range instrumentation and test engineering teams (data capture, scoring, telemetry)
- Spectrum engineering and RF safety specialists (containment, modeling, compliance)
- AI/ML teams experienced with edge autonomy (degraded comms, uncertain navigation)
- Secure network architects supporting on-range data movement and storage
- Digital engineering groups that can connect lab, simulation, and live-range results
If your organization works in any of those lanes, this is a moment to engage—because the demand signal from special operations is getting loud.
One-liner worth keeping: If your autonomy can’t survive jamming in training, it won’t survive contact.
What to do next if you’re building AI for defense readiness
Special operations trainers are right to push for expanded EW and drone development ranges. This isn’t about making exercises more dramatic. It’s about creating credible, repeatable conditions where AI-enabled systems can prove they’re safe, resilient, and effective.
If you’re on the government side, the immediate next step is to treat range access as a program enabler: fund the approvals work, instrument the range, and align training objectives with measurable autonomy and EW outcomes.
If you’re an industry or research partner, look for projects that combine contested-spectrum testing + data generation + autonomy evaluation. That trio is where readiness gets real.
The next year will likely decide whether the U.S. scales contested-spectrum training beyond a couple of flagship sites—or keeps teaching tomorrow’s drone operators in yesterday’s RF conditions. Which path are we choosing?