Air Force SaaS Rules Could Stall AI-Ready Software

AI in Defense & National Security • By 3L3C

New Air Force SaaS rules could slow AI-ready defense software. See what breaks, why it matters, and a practical path to secure speed.

defense software · Air Force acquisition · SaaS policy · AI governance · cybersecurity procurement · data portability

A single procurement memo can add months to delivery timelines. In defense software, months aren’t “just schedule”—they’re capability gaps. When a policy forces extra contracting actions, limits how teams can buy cloud tools, and discourages customization, it doesn’t just slow apps. It slows the feedback loops that make AI in defense practical: rapid iteration, data integration, continuous monitoring, and quick deployment to real operators.

That’s the core concern raised by recent Department of the Air Force guidance on software-as-a-service (SaaS). The Air Force has spent the last several years building a reputation as the military’s software pace-setter—software factories, shared cloud environments, and early adoption of continuous authorization approaches. This new direction risks flipping that story: from “ship improvements weekly” back to “wait for approvals.”

This matters for everyone working in AI in defense & national security—program offices, primes, non-traditional vendors, and investors—because AI systems don’t behave like static tools. They’re living systems. They improve (or degrade) based on data, model updates, workflow changes, and adversary adaptation.

The real issue: AI capability depends on iteration speed

AI-enabled mission planning, cybersecurity, and autonomy require constant iteration, not occasional upgrades. That’s not a preference; it’s how these systems stay relevant.

Modern AI operations typically involve:

  • Frequent updates to data pipelines and schemas
  • Model retraining and evaluation cycles
  • Prompt, policy, and guardrail changes for human-facing copilots
  • Interface tweaks based on operator feedback
  • Integration work across identity, logging, sensors, and command-and-control systems

When contracting and governance add friction to each change, the result is predictable: teams ship fewer improvements, less often. And when AI tools are frozen in “approved” configurations while the real world keeps changing, users route around them—or stop trusting them.

A useful rule of thumb I’ve seen hold up: If the acquisition path can’t support weekly or biweekly change, it won’t support operational AI at scale.

What the memo gets right about SaaS—and why it still misses the point

Consumption-based pricing is a good idea when it fits the product. Paying based on use aligns incentives: if the tool isn’t valuable, usage drops and cost drops. If it becomes essential, the vendor earns more.

But forcing one pricing model as the default for “SaaS” creates two practical problems:

1) Early pilots don’t know their “unit of value” yet

Early-stage defense pilots often start with ambiguous usage patterns:

  • How many analysts will actually use the tool daily?
  • What counts as a “transaction” in a mission workflow?
  • Does value live in queries run, alerts generated, or decisions improved?

If teams must define consumption units too early, they’ll either guess (leading to disputes later) or delay procurement (which kills momentum).

2) The SaaS label becomes a contracting fight

When policy hinges on whether something is “SaaS,” vendors and program offices spend time classifying offerings instead of fielding capability. Is it a subscription? A managed platform? A hosted application with optional services? These arguments don’t improve security or outcomes. They just add negotiation overhead.

Bottlenecks don’t reduce waste—they shift it into time

The memo’s most damaging pattern is centralization disguised as efficiency.

The intent—avoid fragmented purchasing and duplicated tools—makes sense. Defense organizations do waste money buying overlapping software.

But the method matters. Requiring separate contracting actions for SaaS and restricting purchase paths to a pre-approved enterprise catalog creates a catch-22:

  • New vendors and emerging tools aren’t in the catalog yet.
  • But you can’t buy them unless they are.

So teams must either:

  1. Start a long “get into the catalog” process, or
  2. Seek an exception through a narrow approval path

Both options concentrate decision-making at the top and slow experimentation at the edges—exactly where the Air Force has historically been strongest.

Here’s the operational consequence: you get fewer trials, fewer transitions from pilot to production, and more reliance on incumbents.

For AI, that’s especially risky. The AI tooling landscape evolves quickly—cyber anomaly detection, sensor fusion, model evaluation platforms, secure data labeling, synthetic data generation, and mission planning assistants all change fast. If the acquisition pathway can’t ingest new capabilities continuously, adversaries gain time.

Data portability: the goal is right, the language is dangerous

The memo also pushes for the ability to download, migrate, and access government data “in a usable format” at no additional cost.

The objective—reducing vendor lock-in—is legitimate. But “usable format” is a trap unless the Air Force defines it in operational terms.

A practical distinction:

  • Exportable data: files can be extracted (CSV, JSON, Parquet, etc.)
  • Operationally reusable data: exports include context, schema, lineage, and enough metadata to recreate workflows elsewhere

Many platforms make exportable data easy but operational reuse hard. Not because they’re malicious, but because value lives in the relationships: data models, permissions, transformations, dashboards, search indexes, and embedded business logic.

If policy language is vague, lawyers end up negotiating what “usable” means. That creates delays, increases compliance costs, and pushes vendors to overbuild export features that don’t actually solve mission problems.
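
To make that distinction concrete, here's a minimal sketch of what an "operationally reusable" export could look like. This is an illustrative Python example, not a real Air Force schema; every field name (dataset, lineage, access_model, and so on) is a hypothetical placeholder for the kind of context a bare file dump leaves behind.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch: an "operationally reusable" export bundles the
# records with the context needed to rebuild workflows elsewhere.
# All names below are hypothetical, not a real DoD schema.

@dataclass
class ExportManifest:
    dataset: str                      # logical dataset name
    record_count: int                 # total records in the export
    schema: dict                      # field names -> types and meanings
    lineage: list                     # upstream sources and transformations
    access_model: dict                # roles and permissions the data assumed
    derived_artifacts_excluded: list  # vendor-generated items left behind

manifest = ExportManifest(
    dataset="mission_alerts",
    record_count=1_204_331,
    schema={
        "alert_id": "string, unique key",
        "severity": "int, 1 (low) to 5 (critical)",
        "source_feed": "string, originating sensor feed",
    },
    lineage=["raw_sensor_feed -> dedup -> severity_model_v3"],
    access_model={"read": ["analyst"], "write": ["pipeline_service"]},
    derived_artifacts_excluded=["search_index", "embedding_store"],
)

# A bare CSV is merely "exportable"; data shipped with a manifest like
# this is much closer to "operationally reusable."
print(json.dumps(asdict(manifest), indent=2))
```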

A better approach for AI and national security data

For AI systems, portability should focus on interfaces and standards, not wishful thinking about “free” full-fidelity migrations:

  • Clear requirements for API access and bulk export options
  • Explicit expectations for schema documentation and data dictionaries
  • Defined service-level targets for export jobs (time to deliver, completeness)
  • Separation of government-owned data from vendor-generated derived artifacts (indexes, embeddings, feature stores)

The goal is switching options without pretending platforms are interchangeable.

The anti-pattern that breaks AI delivery: banning customization and extensions

The biggest red flag is the restriction on custom code development or modifications to extend functionality “beyond the platform’s original design,” including using APIs to add features not initially intended.

This is where software policy collides head-on with reality.

Defense AI systems are integrations first, models second. They live in ecosystems: identity, endpoints, sensors, messaging, data lakes, classification boundaries, and mission applications. API-based extension is not a “nice to have.” It’s how capabilities become operational.

If you prevent teams from building extensions, you get:

  • More swivel-chair operations (humans manually moving data between systems)
  • More shadow IT (operators using unsanctioned tools to fill gaps)
  • Less observability (harder to audit decisions and data flow)
  • Slower response to adversary tactics (especially in cyber)

Security doesn’t improve when you ban extensions. It improves when you control how extensions happen.

What “secure extensibility” looks like in practice

A more mature stance is: allow customization, but require guardrails.

Examples of guardrails that actually work for AI-enabled systems:

  • Approved integration patterns (service-to-service auth, secrets management, scoped tokens)
  • Continuous monitoring and logging requirements
  • Secure software supply chain controls (SBOMs, signed artifacts, dependency scanning)
  • A lightweight approval step for paid feature work tied to measurable outcomes

This keeps the platform secure while preserving the ability to adapt in days—not quarters.
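
As a sketch of how the first two guardrails in that list compose in code: a scoped, short-lived token injected by secrets management, plus mandatory logging on every cross-system call. The endpoint, header, and scope names here are assumptions for illustration, not any platform's actual API.

```python
import logging
import os
import requests  # assumes the requests library is available

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("integration_audit")

PLATFORM_API = "https://platform.example.mil/api/v1"  # placeholder URL

def call_platform(path: str, payload: dict, scope: str) -> dict:
    # The token is never hardcoded; it arrives via a secrets manager or
    # an injected environment variable, scoped to only what this needs.
    token = os.environ["PLATFORM_TOKEN"]

    resp = requests.post(
        f"{PLATFORM_API}/{path}",
        json=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "X-Requested-Scope": scope,  # hypothetical scoping header
        },
        timeout=10,
    )
    # Every call is logged, so extensions stay observable and auditable.
    log.info("call path=%s scope=%s status=%s", path, scope, resp.status_code)
    resp.raise_for_status()
    return resp.json()
```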

Why this hits AI mission planning, cyber, and autonomy hardest

The policy risk isn’t evenly distributed. It concentrates pain on the parts of defense modernization that need agility the most.

AI-enabled mission planning

Mission planning tools increasingly blend data, constraints, and human judgment. Updates often involve:

  • New data sources (ISR feeds, logistics, airspace restrictions)
  • Workflow changes based on exercises or real-world operations
  • Model adjustments after evaluation reveals bias or failure modes

If every integration or workflow extension is blocked or slowed, mission planning AI becomes a demo—not a dependable operational capability.

AI-driven cybersecurity

Cyber defense is an adversarial learning problem. Attack patterns shift continuously. Detection logic, triage workflows, and automated response playbooks must change fast.

A policy that increases contracting actions and restricts extensions forces cyber teams into slower cycles. That’s not a paperwork problem; it’s a risk exposure problem.

Autonomous systems and human-machine teaming

Autonomy relies on software updates, simulation feedback, and field learning. Even when safety constraints are strict, the surrounding software—telemetry, testing harnesses, evaluation dashboards, and mission configuration—must evolve.

Blocking “custom code” around platforms tends to freeze the very tooling that makes autonomy safe and testable.

A practical path that reduces duplication without killing speed

Here’s a better way to get the benefits the memo wants—cost control, visibility, portability, and security—without kneecapping software innovation.

1) Treat enterprise catalogs as accelerators, not gates

Use catalogs to make buying easy, not to block buying.

  • Let teams buy outside the catalog for time-bound pilots (e.g., 90–180 days)
  • Require a fast “catalog intake” path for successful pilots
  • Track enterprise usage centrally without forcing centralized permission for everything

2) Standardize outcomes and controls, not contract shapes

Mandating specific contracting structures tends to create workarounds. Instead, define what must be true:

  • Audit logs and monitoring are accessible to the government
  • Data export requirements are testable in acceptance criteria
  • Security controls align with continuous monitoring practices

Then let contracting officers choose the right vehicle.

3) Make data portability measurable

“Usable format” should become acceptance tests, sketched in code after this list:

  • Export completeness (% of records, fields, metadata)
  • Time-to-export SLA for defined data volumes
  • Documentation deliverables (schemas, transformations, lineage notes)
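
Here's a minimal sketch of those criteria as an executable check. The thresholds and field names are illustrative assumptions, not policy numbers; the point is that each requirement becomes something a program office can run, not argue about.

```python
import time

REQUIRED_FIELDS = {"alert_id", "severity", "source_feed", "timestamp"}
MIN_COMPLETENESS = 0.99         # at least 99% of records must export
MAX_EXPORT_SECONDS = 4 * 3600   # time-to-export SLA for this data volume

def check_export(records_expected: int, records_exported: int,
                 exported_fields: set, started: float, finished: float) -> list:
    """Return a list of acceptance failures; an empty list means pass."""
    failures = []
    completeness = records_exported / records_expected
    if completeness < MIN_COMPLETENESS:
        failures.append(f"completeness {completeness:.2%} is below the "
                        f"{MIN_COMPLETENESS:.0%} floor")
    missing = REQUIRED_FIELDS - exported_fields
    if missing:
        failures.append(f"export is missing documented fields: {sorted(missing)}")
    if finished - started > MAX_EXPORT_SECONDS:
        failures.append("export exceeded the time-to-export SLA")
    return failures

# Example with made-up numbers: completeness and timing pass, but the
# export dropped a documented field, so acceptance fails.
start = time.time()
print(check_export(1_000_000, 998_500,
                   {"alert_id", "severity", "source_feed"},
                   start, start + 3600))
```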

4) Replace “no customization” with “controlled extensibility”

Allow API-based extension and custom development under clear conditions:

  • Security review for integration patterns
  • Rate limits and permission scoping
  • Logging and retention requirements
  • Approval by the contracting officer’s representative (COR) for billable feature work

The point is to keep teams shipping while staying accountable.
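
As an illustration, here's what two of those conditions, rate limits and permission scoping, might look like enforced in code rather than in contract clauses. The scope names and limits are hypothetical.

```python
import time

ALLOWED_SCOPES = {"read:alerts", "write:annotations"}  # granted to this extension

class RateLimiter:
    """Token-bucket limiter: at most `rate` calls per second, `burst` spikes."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(rate=5.0, burst=10)

def guarded_call(scope: str) -> bool:
    # Deny anything outside the scopes this extension was granted.
    if scope not in ALLOWED_SCOPES:
        raise PermissionError(f"scope {scope!r} was not granted")
    # Deny calls that exceed the agreed rate limit.
    return limiter.allow()

print(guarded_call("read:alerts"))  # True while within the rate limit
```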

What to do next (if you’re buying, building, or selling into the Air Force)

If you’re a program office or requirements owner:

  • Write requirements that assume change: budget and plan for monthly iteration, not annual upgrades.
  • Demand portability tests in the SOW: define what exports must include and how fast they must happen.
  • Push for controlled extensibility language: your operators will need integrations.

If you’re an AI or SaaS vendor:

  • Prepare a portability package: schema docs, export scripts, and clear separation of customer data vs derived artifacts.
  • Document your secure extension patterns: how APIs are authenticated, logged, and monitored.
  • Price pilots to reduce friction: make early evaluation easy before consumption metrics are fully understood.

If you’re a defense innovation leader:

  • Track “time to first value” as a KPI: days from requirement to fielded capability.
  • Treat contracting capacity as a constraint: policies that add contracting actions must show measurable payoff.
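
The first of those KPIs is simple to instrument. A minimal sketch, with made-up program names and dates:

```python
from datetime import date
from statistics import median

# "Time to first value": days from an approved requirement to a
# fielded capability. Everything below is illustrative data.

pilots = [
    {"name": "cyber-triage-copilot", "approved": date(2024, 1, 8),
     "fielded": date(2024, 3, 4)},
    {"name": "mission-plan-assist", "approved": date(2024, 2, 1),
     "fielded": date(2024, 7, 19)},
]

days_to_value = [(p["fielded"] - p["approved"]).days for p in pilots]
print(f"median time to first value: {median(days_to_value)} days")

# Track this before and after a policy change: if the median jumps,
# the policy is consuming contracting capacity, not reducing waste.
```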

One-liner worth repeating: If policy can’t keep pace with software, it can’t keep pace with the fight.

The Air Force earned its software reputation by proving that secure delivery and rapid iteration can coexist. AI in defense is the next test of that belief. The organizations that win won’t be the ones with the most memos—they’ll be the ones that can update, evaluate, and field improvements continuously while staying secure.

What would change in your mission area if software updates took weeks instead of months—and what would break if the next policy made that impossible?
