Make Defense AI Faster: Fix Budget-to-Need Acquisition

AI in Defense & National Security | By 3L3C

Defense AI adoption is limited by budgeting and acquisition speed. Here’s how portfolio funding and accountability make “money follows need” real.

Tags: Defense AI, Defense acquisition, PPBE reform, Portfolio management, DoD budgeting, National security innovation

Defense AI doesn’t fail in the lab. It fails in the budget drill.

Teams can train a model, harden it, test it in a realistic range environment, and prove it helps operators make better decisions. Then the program hits the wall: the money is fenced into the wrong color of funds, locked into the wrong line item, and scheduled on a timeline that assumes software changes once a year.

That’s why the most overlooked AI accelerator in national security isn’t a new model architecture—it’s acquisition transformation that actually sticks. The recent push for faster buying, more commercial adoption, longer program-manager tours, and portfolio-level thinking points in the right direction. But the part that determines whether any of it lasts is the quiet directive at the center of the whole effort: improve budget flexibility so money follows need.

This post is part of our AI in Defense & National Security series, and I’m going to take a stance: if the Department of Defense and Congress don’t modernize how money moves, most “AI at speed” initiatives will keep turning into pilot purgatory.

Why defense AI depends on acquisition reform (more than most people admit)

AI capabilities evolve on operational timelines, not five-year plans. That’s the core mismatch. Portfolio budgeting, flexible reprogramming, and outcome-based accountability aren’t finance nerd topics—they’re the plumbing required to field AI-enabled systems before the threat changes.

Three realities make AI uniquely sensitive to acquisition bottlenecks:

  1. AI systems degrade without continuous updates. Models drift, data pipelines break, adversaries adapt, and sensors change. AI is closer to an intelligence cycle than a traditional “buy it once” platform.
  2. The value is often in integration, not the model. The hard work is stitching AI into tactical networks, mission workflows, and human decision loops—work that needs fast contracting and fast funding shifts.
  3. AI risk is operational, not just technical. The biggest failures tend to be mismatched requirements, poor data governance, brittle sustainment plans, and incentives that reward compliance over outcomes.

So when acquisition leaders talk about speed, commercial buying practices, and empowering portfolio managers, they’re implicitly talking about what AI needs: continuous adaptation at scale.

The four shifts underway—and what they mean for AI programs

The acquisition transformation conversation is really four interlocking shifts: requirements, budgeting, execution, and workforce. Each one has a direct AI consequence.

1) Requirements that come from operators, not paperwork

AI requirements should be expressed as measurable decisions and effects, not feature lists. If frontline units have more influence over what’s needed, AI programs can move from abstract “situational awareness” promises to concrete operational outcomes.

A practical pattern I’ve seen work (sketched in code below): write requirements around decision advantage.

  • What decision is being improved?
  • At what echelon (tactical, operational, strategic)?
  • With what latency tolerance (seconds, minutes, hours)?
  • Against what deception and data-denial conditions?

That kind of operator-shaped requirement prevents a common failure mode: buying an impressive model that doesn’t survive the first contested deployment.
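
To make that tangible, here is a minimal sketch of an operator-shaped requirement as a structured record. Everything here is an assumption for illustration — the field names, the echelon enum, and the example values are not drawn from any official schema:

```python
# Illustrative only: field names and values are assumptions, not a DoD schema.
from dataclasses import dataclass, field
from enum import Enum


class Echelon(Enum):
    TACTICAL = "tactical"
    OPERATIONAL = "operational"
    STRATEGIC = "strategic"


@dataclass
class DecisionRequirement:
    """An operator-shaped AI requirement, expressed as a decision to improve."""
    decision: str                  # the decision being improved
    echelon: Echelon               # who makes it
    max_latency_seconds: float     # latency tolerance for a usable answer
    contested_conditions: list[str] = field(default_factory=list)  # deception / data denial to survive
    success_metric: str = ""       # how improvement will be measured


req = DecisionRequirement(
    decision="Cue the strike cell on time-sensitive maritime targets",
    echelon=Echelon.TACTICAL,
    max_latency_seconds=120.0,
    contested_conditions=["GPS denial", "decoy emitters", "intermittent comms"],
    success_metric="median detection-to-cue time under contested conditions",
)
```

A requirement written this way can be tested directly: either the system improves the named decision within the latency budget under the named conditions, or it doesn’t.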

2) Programming and budgets that match technology velocity

The annual appropriation cycle and rigid line-item structures are slow by design. That’s acceptable for stable, decades-long platform programs. It’s toxic for AI, where capability improvements are iterative and often quarterly.

Portfolio funding is the right direction because it:

  • lets you shift resources from a failed approach to a better one without waiting a year
  • supports mixed approaches (commercial tools + government integration + test infrastructure)
  • funds AI as a lifecycle (data, compute, deployment, monitoring), not as a one-time “development” event

If the FY 2027 budget is the first budget built to reinforce these reforms, it’s also the first real test: does the budget request reflect how the department claims it wants to buy and update capabilities?

3) Execution that tolerates smart risk instead of punishing learning

Most “risk reduction” in defense acquisition is paperwork that arrives after the risk has already moved. For AI, the only honest way to manage risk is through:

  • rapid prototyping with real users
  • red-teaming models and data pipelines
  • continuous evaluation in representative environments
  • quick termination of approaches that don’t work

That last point matters. AI portfolios should expect attrition: some models will underperform; some vendors will fail to deliver; some data sources will become unreliable. The system has to treat fast failure as a feature, not a scandal.
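
To show how simple a fast-failure rule can be, here is an illustrative sketch. The threshold, the “misses allowed” count, and the scoring scheme are all assumptions, not a prescribed policy:

```python
# Illustrative sketch of a portfolio attrition rule; all thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class ApproachStatus:
    name: str
    eval_scores: list[float]   # continuous-evaluation scores, most recent last
    target: float              # minimum acceptable score in a representative environment


def should_terminate(status: ApproachStatus, misses_allowed: int = 2) -> bool:
    """Terminate after the approach misses its target in every one of the
    last `misses_allowed + 1` evaluations.

    The rule is deliberately simple: fast, predictable termination beats
    open-ended 'one more quarter' extensions.
    """
    recent = status.eval_scores[-(misses_allowed + 1):]
    misses = sum(1 for score in recent if score < status.target)
    return misses > misses_allowed


# Example: a vendor model that keeps missing its detection target.
vendor_a = ApproachStatus("vendor-a-detector", [0.71, 0.64, 0.62, 0.60], target=0.70)
print(should_terminate(vendor_a))  # True: three consecutive misses
```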

4) Workforce and org design that can run portfolios

Portfolio acquisition is a different job than program-element babysitting. You need leaders who can balance:

  • operational tradeoffs (what helps the mission this quarter?)
  • technical tradeoffs (data quality vs. model complexity vs. compute constraints)
  • contracting and vendor strategy (avoid lock-in, keep competition alive)
  • governance and safety (especially for autonomy and targeting-adjacent use cases)

Longer tours for program managers help. But portfolio models also demand tight integration between acquisition, comptroller/budget teams, cyber authorities, test organizations, and operational sponsors.

If finance and acquisition teams aren’t embedded with the portfolio decision-makers, the result is predictable: the “portfolio” becomes branding while money still behaves like it’s 2005.

“Money follows need”: what has to change for AI portfolios to work

The slogan is easy. The mechanics are hard. If you want flexible portfolios without giving Congress a blank check, the answer isn’t less oversight—it’s better oversight designed for modern systems.

Here are five changes that make “money follows need” real for AI in defense.

1) Budget structure: collapse programs into mission portfolios

AI-enabled capabilities should be budgeted as mission portfolios, not scattered across dozens of micro-lines. A portfolio should cover the lifecycle:

  • data acquisition and labeling
  • cloud/edge compute
  • model development and evaluation
  • integration into platforms and C2 systems
  • monitoring, patching, retraining, and cyber hardening

A concrete example structure (illustrative): instead of separating “UAS software,” “UAS sensors,” “UAS upgrades,” and “AI autonomy experiments” into disconnected lines, create an Unmanned Systems Portfolio with explicit sub-allocations that can shift within guardrails.
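
Here is one way that structure could be expressed as a machine-readable allocation, in a short Python sketch. The portfolio name, the dollar figure, the percentage splits, and the guardrail numbers are all notional assumptions:

```python
# Illustrative portfolio structure; names, dollar figure, and percentages are notional.
unmanned_systems_portfolio = {
    "portfolio": "Unmanned Systems",
    "total_budget": 250_000_000,          # notional figure
    "sub_allocations": {
        "data_acquisition_and_labeling": 0.15,
        "cloud_edge_compute": 0.20,
        "model_development_and_evaluation": 0.25,
        "platform_and_c2_integration": 0.25,
        "monitoring_retraining_hardening": 0.15,
    },
    # Guardrails: how far a sub-allocation may drift from plan
    # before higher-level approval or notification kicks in.
    "internal_shift_ceiling": 0.05,       # portfolio-manager authority
    "notification_threshold": 0.03,       # report shifts above this to oversight
}

# Sanity check: sub-allocations must account for the whole portfolio.
assert abs(sum(unmanned_systems_portfolio["sub_allocations"].values()) - 1.0) < 1e-9
```

Note that the sustainment lines (monitoring, retraining, hardening) are first-class sub-allocations, not an afterthought.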

For AI, this also prevents a chronic sustainment failure: buying the “algorithm” but not paying for the data pipeline that keeps it trustworthy.

2) Reprogramming: replace tiny thresholds with percentage-based guardrails

Dollar thresholds for reprogramming were built for a different era. AI portfolios will vary widely in size, so a fixed dollar threshold ends up either trivially loose for a small portfolio or paralyzingly strict for a large one.

A workable approach is tiered, percentage-based authority. For example:

  • up to 5% internal movement within a portfolio at the portfolio-manager level
  • up to 10% movement across portfolios within a service at the service financial leadership level
  • bounded cross-service movement at the department comptroller level with notification rules

The point isn’t the exact number. The point is predictable agility with clearly defined ceilings, simple enough to encode as an auditable rule (see the sketch below).
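
As a sketch, the tiered rule above can be written as a single auditable function. The tier percentages mirror the illustrative numbers in the list; the function name and signature are assumptions:

```python
# Sketch of tiered, percentage-based reprogramming authority.
# Tier percentages mirror the illustrative examples above.

def approval_tier(amount: float, portfolio_budget: float,
                  crosses_portfolios: bool, crosses_services: bool) -> str:
    """Return the lowest approval level allowed to move `amount` dollars."""
    share = amount / portfolio_budget
    if crosses_services:
        return "department comptroller (with notification rules)"
    if crosses_portfolios:
        return ("service financial leadership" if share <= 0.10
                else "department comptroller (with notification rules)")
    # Movement inside a single portfolio
    return ("portfolio manager" if share <= 0.05
            else "service financial leadership")


# Moving $8M inside a $200M portfolio (4%) stays at the portfolio-manager level.
print(approval_tier(8_000_000, 200_000_000,
                    crosses_portfolios=False, crosses_services=False))
```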

3) Accountability: measure outcomes, not compliance artifacts

AI acquisition accountability should track operational outcomes and delivery tempo. Process compliance alone doesn’t tell Congress (or taxpayers) whether flexibility is producing value.

Portfolio reporting should include a small set of metrics that are hard to game:

  • speed-to-field: time from validated need to deployed capability
  • update cadence: how often models and software are safely updated in production
  • mission effect: measurable changes in detection time, false alarm rates, targeting cycle time, or analyst throughput (depending on use case)
  • cost variance: forecast vs. actual run-rate for compute, data, and sustainment
  • operational reliability: uptime, latency, and performance under degraded comms

This is where AI can actually help acquisition: automated telemetry from deployed systems can feed dashboards that show performance trends without waiting for end-of-year reports.
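
As a minimal sketch of what that telemetry could feed, here are two of those metrics computed from deployment events. The event fields and dates are assumed for illustration:

```python
# Sketch: computing two hard-to-game portfolio metrics from deployment telemetry.
# Event fields and dates are illustrative assumptions.
from datetime import date
from statistics import median


def speed_to_field(need_validated: date, deployed: date) -> int:
    """Days from validated need to deployed capability."""
    return (deployed - need_validated).days


def update_cadence_days(update_dates: list[date]) -> float:
    """Median days between safe production updates."""
    gaps = [(b - a).days for a, b in zip(update_dates, update_dates[1:])]
    return median(gaps)


updates = [date(2026, 1, 10), date(2026, 2, 12), date(2026, 3, 9), date(2026, 4, 14)]
print(speed_to_field(date(2025, 6, 1), date(2026, 1, 10)))  # 223 days
print(update_cadence_days(updates))                          # 33 days
```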

4) Workforce incentives: reward teams that ship and sustain

If promotions reward clean paperwork more than delivered capability, the reform will snap back. For AI portfolios, incentives should favor:

  • shipping small increments frequently
  • retiring legacy tools when better ones exist
  • building reusable data and test infrastructure
  • keeping competitive pressure on vendors

One practice worth institutionalizing: short, structured industry immersions for acquisition and budget professionals focused on modern product delivery—how commercial teams run iteration, incident response, security patching, and continuous deployment.

5) Congressional engagement: transparency is the price of flexibility

Budget flexibility without trust becomes a political dead end. If the department reduces engagement with Congress while requesting more flexible money, appropriators will assume (reasonably) that oversight is being evaded.

A better bargain is straightforward:

Flexibility should be paired with real-time visibility.

That means shared portfolio dashboards, routine briefings tied to outcome metrics, and clear rules for when notification is required.

For AI in defense, this transparency has a second benefit: it forces programs to show that safety, security, and testing are real, not checkbox theater.

Three lessons for leaders trying to field AI faster (without losing control)

You can get speed and stewardship at the same time, but only if you design for it. Here are three practical lessons that apply immediately to AI integration.

  1. Treat AI as a sustainment-heavy capability from day one. If the budget doesn’t include data operations, monitoring, retraining, and cyber hardening, you’re not buying AI—you’re buying a demo.
  2. Use portfolios to fund options, not single bets. Competitive prototyping and parallel vendor paths reduce risk faster than trying to “get it right” in requirements documents.
  3. Build oversight around telemetry and outcomes. If a committee can see delivery tempo, performance, and cost run-rate continuously, it’s easier to grant flexibility.

What “success” looks like by FY 2027

A lasting transformation will show up as boring consistency: money moves to what works, and failing approaches stop quickly. By the time the FY 2027 cycle is underway, you should be able to point to evidence like:

  • AI-enabled systems updated safely on a predictable cadence (monthly or quarterly, depending on mission)
  • portfolio managers empowered to shift funds without months of approvals
  • acquisition teams using commercial contracting patterns where they fit, without sacrificing security
  • Congress receiving clearer, more operationally meaningful reporting than today’s line-by-line churn

The strategic stakes aren’t abstract. Peer competitors can field capability faster because their industrial systems and decision rights are aligned for speed. Meanwhile, recent conflicts have shown that innovation happens closest to the fight—and the organizations that win are the ones that can adapt continuously.

If defense leaders want AI that matters in real operations, acquisition transformation can’t be a memo. It has to be a financial architecture, a workforce model, and a trust compact with Congress.

If you’re building or buying defense AI right now, the most useful question isn’t “Which model should we pick?” It’s this: Are we funded and governed to update this capability as fast as the threat will force us to?
