Algorithmic counterinsurgency is spreading. Learn what Gaza signals for AI targeting, civilian harm risk, and the governance Western militaries need now.
Algorithmic Counterinsurgency: What Gaza Signals
Fifteen thousand strikes in 35 days is a tempo humans can’t sustain. That number alone explains why AI in defense and national security is no longer a “future capability” conversation—it’s already shaping how targeting works, how accountability is assigned, and how quickly militaries can move from detection to lethal action.
Israel’s war in Gaza has become the most cited real-world case study of algorithmic warfare: AI-assisted target nomination, rapid approval cycles, and industrial-scale sensor-to-shooter operations. Whether you see the campaign as a warning, a model, or both, the operational lesson is the same: AI changes the limiting factor in warfare from information scarcity to judgment under speed.
This post is part of our AI in Defense & National Security series, and I’m going to take a clear stance: Western militaries will adopt more algorithmic counterinsurgency features—because the incentives are strong—but the version that spreads will depend on governance. The difference between “decision advantage” and “automated harm at scale” is policy, process, and proof.
Algorithmic counterinsurgency is speed plus scale
Algorithmic counterinsurgency is counterinsurgency run at machine tempo, where AI prioritizes, scores, and nominates targets based on bulk data—often faster than humans can meaningfully challenge. The key shift isn’t that AI fires weapons by itself; it’s that AI compresses the human role into a brief approval step.
In classic counterinsurgency (COIN), militaries balance force with legitimacy—often emphasizing population security, intelligence networks, and political strategy. Israel’s approach in Gaza, as widely reported and debated, illustrates a different model: coercive control plus algorithmic acceleration. Systems commonly discussed in public reporting (including tools described as automating target identification and nomination) appear to fuse:
- Communications metadata (calls, devices, subscriber patterns)
- Social network associations (who knows whom, who is near whom)
- Signals intelligence and intercepted traffic
- Geolocation and pattern-of-life indicators
The operational payoff is obvious: you can generate target lists rapidly and maintain a strike tempo that would otherwise be limited by analyst hours.
The strategic cost is also obvious: when suspicion becomes a numeric score, “distinction” can quietly degrade into “probabilistic association.” That’s not a theoretical ethics seminar point. It’s a practical systems-engineering issue: the faster you go, the more you rely on proxies.
Three failure modes show up fast in high-tempo targeting
Israel’s experience highlights three fragilities that any Western military should assume will appear in its own AI-enabled operations:
- Compression risk (review becomes ritual): When the detection-to-strike interval collapses, human review can turn into checkbox compliance. The moment you measure performance as “targets serviced per day,” your process will adapt to produce throughput.
- Scale risk (lowered evidentiary thresholds): Mass nomination encourages “good enough” certainty. Even if the model is accurate on average, the tail risk matters because the tail is where civilian harm concentrates.
- Error externalization (blame shifts to the model): When something goes wrong, organizations can treat the model as an objective actor: the system flagged it. That’s culturally seductive—and corrosive for accountability.
If you’re building AI for defense, you should treat these as design constraints, not PR issues.
Why Western militaries are tempted to adopt this model
Western adoption pressure comes from three places: adversary tempo, coalition interoperability, and procurement economics. Gaza didn’t invent AI targeting. It provided a high-intensity demonstration of what happens when AI is integrated across the whole targeting pipeline.
1) The operational math favors automation
Modern battlefields are saturated with sensors: drones, ISR feeds, electronic emissions, commercial imagery, open-source reporting, and cyber telemetry. Humans are the bottleneck.
So defense organizations push toward:
- Automated detection and triage
- Rapid fusion across classified and tactical networks
- “Sensor-to-shooter” orchestration
The U.S. experience with computer vision and assisted analysis (often discussed under the broad umbrella of algorithmic warfare programs) fits this arc: reduce analyst burden, prioritize what matters, speed decisions.
But Gaza puts a spotlight on the missing sentence in most procurement briefs: speed without disciplined judgment isn’t decision superiority—it’s faster uncertainty.
2) Interoperability pulls doctrine along with tech
Alliances share data, platforms, and targeting processes. If one partner demonstrates a high-tempo targeting workflow, others feel pressure to align—especially when they expect to fight together.
Interoperability has a hidden effect: it standardizes assumptions. If a coalition shares scoring features, watchlists, or nomination logic, its members also inherit one another’s proxy choices and risk tolerances.
3) “Battle-tested” sells—especially in a tight budget year
Defense procurement in 2025 has two simultaneous realities: more demand (Ukraine lessons, Indo-Pacific pacing threats, drone proliferation) and more scrutiny (civilian harm, surveillance concerns, export politics).
In that environment, vendors that can credibly claim operational validation have an advantage. The risk is that the market rewards performance metrics that are easy to demonstrate (tempo, strike count, detections) more than metrics that are harder to verify (lawful distinction quality, proportionality discipline, post-strike truth).
AI in targeting can reduce harm—only under strict conditions
AI can reduce civilian harm compared to purely manual targeting, but only if the model and the rules of engagement are built for restraint. This is where most public debates get sloppy: people argue about “AI good” vs “AI bad,” when the real variable is configuration.
Here’s what tends to be true in practice:
- Humans make errors under stress, anger, fatigue, and information overload.
- AI can process more data consistently and flag anomalies humans miss.
- AI also scales bad assumptions efficiently.
So the deciding question becomes: what assumptions are you scaling?
If your operational policy quietly treats “military-age male” as a strong proxy for combatancy, AI will scale that bias. If your organization accepts high collateral thresholds for low-level targets, AI will simply help you hit more of them.
If, instead, you require multi-source confirmation and enforce conservative civilian harm ceilings—especially for residential strikes—AI can help you avoid acting on thin, single-source suspicion.
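If that requirement is going to survive contact with operational tempo, it has to exist as a hard gate in the nomination pipeline, not as a slide bullet. Here is a minimal Python sketch of what such a gate could look like; the Nomination fields, the source taxonomy, and the thresholds are illustrative assumptions, not a description of any fielded system.

```python
from dataclasses import dataclass, field

# Illustrative source categories; the taxonomy is an assumption for this sketch.
INDEPENDENT_SOURCE_TYPES = {"sigint", "imagery", "humint", "geolocation"}

@dataclass
class Nomination:
    """A hypothetical AI-generated target nomination."""
    target_id: str
    confidence: float                      # calibrated probability in [0, 1]
    source_types: set = field(default_factory=set)
    estimated_civilian_exposure: int = 0   # expected civilians within the effects radius

def passes_confirmation_policy(nom: Nomination,
                               min_sources: int = 2,
                               min_confidence: float = 0.9,
                               civilian_ceiling: int = 0) -> bool:
    """Return True only if the nomination clears conservative, policy-coded gates."""
    independent = nom.source_types & INDEPENDENT_SOURCE_TYPES
    if len(independent) < min_sources:
        return False   # single-source suspicion is not enough
    if nom.confidence < min_confidence:
        return False   # thin probabilistic association fails the gate
    if nom.estimated_civilian_exposure > civilian_ceiling:
        return False   # exceeds the configured harm ceiling; escalate to humans
    return True
```

The design choice that matters is that the gate blocks rather than warns: at high tempo, advisory flags become noise.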
A practical “good AI targeting” checklist
If you’re responsible for AI governance inside defense or national security, these are non-negotiables:
- Provenance controls: you should know where each feature came from and how stale it is.
- Uncertainty visibility: confidence scores must be meaningful, calibrated, and auditable.
- Override friction: humans need both authority and time to say no.
- Policy-coded restraint: if the law or ROE requires additional checks for certain target types (schools, residences, protected sites), the workflow must enforce it.
- Post-strike learning loops: you need structured feedback—what was hit, who was harmed, what was wrong—so the system doesn’t fossilize errors.
Most organizations do only parts of this. Most companies selling “AI-enabled targeting” don’t want to be evaluated on all of it.
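As one illustration of the provenance item in the checklist above, here is a minimal sketch of feature-level provenance with a staleness check. The field names, the staleness limits, and the idea of invalidating a score on expired inputs are assumptions for this sketch, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class FeatureProvenance:
    """Provenance metadata for one input feature to a targeting model (illustrative)."""
    feature_name: str
    source_system: str      # hypothetical upstream feed, e.g. an imagery or metadata pipeline
    collected_at: datetime  # UTC collection timestamp
    max_age: timedelta      # policy-defined staleness limit for this feature type

    def is_stale(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now - self.collected_at > self.max_age

def stale_features(features: list[FeatureProvenance]) -> list[str]:
    """Names of features past their staleness limit; any hit should invalidate the score."""
    return [f.feature_name for f in features if f.is_stale()]
```

Pair this with calibrated, auditable confidence (the second item) and the system can refuse to surface any score built on expired inputs.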
The real proliferation risk: wartime architectures migrating home
The most underestimated risk isn’t battlefield use itself—it’s domestic reuse of the same AI surveillance stack. Once you’ve built infrastructure for bulk suspicion scoring, graph analysis, and identity resolution, it becomes tempting to point it inward.
Western democracies already have many of the ingredients:
- Dense CCTV networks
- Biometrics at borders
- Social media monitoring programs
- Phone tracking technologies and location brokers
If wartime AI normalizes “association equals risk,” domestic security can slide toward the same logic: network-based suspicion, predictive identification, and automated triage for detention or investigation.
This matters for national security leaders because it creates a backlash cycle:
- Domestic legitimacy erodes
- Intelligence sharing becomes politically contested
- Procurement becomes legally constrained
- Operational capability suffers
Algorithmic counterinsurgency doesn’t just raise moral questions. It can create strategic self-harm if governance lags behind capability.
What Western defense leaders should do now (five moves)
If AI is going to sit inside the kill chain, governments must require proof, not promises. Here are five concrete moves that translate “ethical AI” into enforceable practice.
1) Treat targeting-relevant AI as a controlled export category
Export controls should cover more than drones and missiles. They should include:
- Target development software
- Sensor-to-shooter orchestration modules
- Identity resolution and graph-based nomination tools
Licenses should require end-use conditions tied to auditability and civilian harm mitigation. If a buyer can’t produce logs and harm assessments, they shouldn’t get updates.
2) Mandate audit logs that reconstruct every AI-influenced strike
If an AI score influenced a lethal decision, a reviewer should be able to answer:
- What data sources were used?
- What features mattered most?
- What was the confidence and uncertainty?
- Who approved it, and what did they see?
- Were policy gates triggered (residential, protected sites, time-sensitive conditions)?
No logs, no legitimacy. Also: no logs means you can’t improve.
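A concrete way to enforce this is to make the reviewer’s questions the schema of the log itself. The sketch below is illustrative: the field names are assumptions, and a real record would live in a tamper-evident store, but it shows the shape. If a field can’t be filled, the decision shouldn’t proceed.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StrikeDecisionRecord:
    """One AI-influenced strike decision, captured so a reviewer can reconstruct it later.

    Each question a reviewer must answer maps to a required, retained field (illustrative names).
    """
    decision_id: str
    timestamp: datetime
    data_sources: list[str]                # what data sources were used
    top_features: dict[str, float]         # what features mattered most (name -> contribution)
    confidence: float                      # model confidence as displayed to the approver
    uncertainty_notes: str                 # known gaps or conflicting indicators
    approver_id: str                       # who approved it
    approver_view: list[str]               # artifacts actually shown at approval time
    policy_gates_triggered: list[str] = field(default_factory=list)  # e.g. "residential"

def is_reviewable(record: StrikeDecisionRecord) -> bool:
    """A record supports review only if every reconstruction question can be answered."""
    return all([record.data_sources, record.top_features,
                record.approver_id, record.approver_view])
```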
3) Codify proportionality guardrails into the workflow
Proportionality can’t remain a memo or a training slide. It needs to show up as workflow friction:
- Higher approval thresholds for residential strikes
- Mandatory pauses or second reviews for low-confidence nominations
- Weapon-selection constraints tied to civilian density
When leaders say “humans are accountable,” the system should reflect that in approval steps that can’t be skipped, rushed, or quietly shifted onto the model.
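One way to encode that friction is to make the review requirement a function of target type, confidence, and civilian density, with the strictest outcome winning. The categories and thresholds below are placeholders; in practice the values belong in reviewed, version-controlled policy, not in a developer’s head.

```python
from enum import Enum, auto

class ReviewOutcome(Enum):
    STANDARD_APPROVAL = auto()   # normal approval chain applies
    SECOND_REVIEW = auto()       # requires an additional, independent approver
    MANDATORY_PAUSE = auto()     # held until confidence or intelligence improves

# Illustrative protected categories; a real list comes from law and ROE, not code.
PROTECTED_TARGET_TYPES = {"residence", "school", "medical", "protected_site"}

def required_friction(target_type: str,
                      confidence: float,
                      civilian_density: float) -> ReviewOutcome:
    """Return the minimum workflow friction policy demands for a nomination (sketch)."""
    if confidence < 0.7:
        return ReviewOutcome.MANDATORY_PAUSE     # low-confidence nominations wait
    if target_type in PROTECTED_TARGET_TYPES:
        return ReviewOutcome.SECOND_REVIEW       # higher approval threshold
    if civilian_density > 0.5:                   # normalized density in the effects radius
        return ReviewOutcome.SECOND_REVIEW       # weapon selection also constrained here
    return ReviewOutcome.STANDARD_APPROVAL
```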
4) Stand up an independent civilian harm cell with real access
Many militaries do battle damage assessment. Fewer do credible civilian harm assessment with independent challenge.
A standing cell should have access to:
- Strike logs
- ISR clips and imagery
- Weapon effects data
- Casualty reporting pipelines
It should publish internal metrics that commanders can’t ignore: error rates, reversal rates, and harm trends by target type.
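Those metrics only carry weight if they are computed the same way every time. Here is a minimal sketch of the aggregation, assuming each post-strike assessment is a simple record with a target type, a misidentification flag, a reversal flag, and a civilian casualty count (all illustrative field names).

```python
from collections import defaultdict

def harm_metrics(assessments: list[dict]) -> dict[str, dict[str, float]]:
    """Aggregate per-target-type rates a civilian harm cell could publish internally (sketch)."""
    totals: dict[str, dict[str, float]] = defaultdict(
        lambda: {"strikes": 0, "errors": 0, "reversals": 0, "civilian_casualties": 0}
    )
    for a in assessments:
        t = totals[a["target_type"]]
        t["strikes"] += 1
        t["errors"] += int(a["misidentified"])        # wrong target or wrong identity
        t["reversals"] += int(a["reversed"])          # nomination later judged unjustified
        t["civilian_casualties"] += a["civilian_casualties"]
    return {
        target_type: {
            "error_rate": t["errors"] / t["strikes"],
            "reversal_rate": t["reversals"] / t["strikes"],
            "civilian_casualties": t["civilian_casualties"],
        }
        for target_type, t in totals.items()
    }
```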
5) Push for a practical international “AI in targeting” baseline
Big treaties take years. A baseline instrument can move faster by focusing on minimum obligations when AI contributes to lethal decisions:
- Meaningful human review time
- Auditability and record retention
- Restrictions on bulk personal data use for lethal nomination
- Public reporting on civilian harm methodology
If democracies don’t set a floor, they’ll eventually inherit someone else’s.
What this means for defense AI teams and vendors
The winners in defense AI over the next five years won’t be the teams that ship the fastest demo. They’ll be the teams that can prove control. Procurement officers are increasingly being asked to justify not only performance, but governance.
If you’re building or buying AI for national defense, I’ve found it helps to frame requirements around three proofs:
- Proof of reliability: performance under realistic adversarial conditions, not clean lab data.
- Proof of restraint: measurable civilian harm mitigation and conservative failure behavior.
- Proof of accountability: reconstructable decisions with human-readable rationale and logs.
Gaza is often framed as a question of whether AI “caused” harm. The more useful framing for Western decision-makers is simpler: AI removed friction. If the friction you remove is careful judgment, the outcome is predictable.
Western militaries are going to absorb lessons from Israel’s algorithmic counterinsurgency—some explicit, some through vendors and interoperability. The open question is whether they’ll import the discipline along with the speed.
If you’re responsible for AI in defense and national security—policy, procurement, engineering, or oversight—now is the time to stress-test your targeting stack against the three known failure modes: compression, scale, and error externalization. The next conflict won’t wait for governance to catch up.
The future of algorithmic warfare won’t be decided by model architecture. It’ll be decided by what democracies are willing to require, verify, and refuse.
If you want help evaluating an AI-enabled targeting or ISR analytics program—technical governance, audit logging design, model risk controls, or procurement-ready requirements—reach out. What you measure and enforce today becomes the default behavior in wartime.