Defense SBIR is drifting toward repeat winners. See how AI-driven innovation management can improve fairness, find new entrants, and speed transition to mission use.

Fixing Defense SBIR: AI Can Stop R&D Funding Capture
Bureaucracy doesn’t just slow defense innovation—it selects for the wrong winners.
One of the clearest examples is the Defense Department’s Small Business Innovation Research (SBIR) pipeline. Between FY2005 and FY2024, DoD awarded $27.5B in SBIR Phase I and Phase II funding to 8,945 companies. Yet $4.1B (15%) went to just 25 “SBIR mills,” and the top five captured 5% of the entire DoD SBIR Phase I/II budget. That’s not a healthy innovation ecosystem. That’s a system optimized for repeat players.
This matters across the AI in Defense & National Security conversation because AI capabilities don’t arrive through press releases—they arrive through acquisition pathways that can identify real merit, fund it fast, and then get it into programs of record. If your R&D funnel rewards “bureaucratic fluency” over technical advantage, you don’t just waste money. You weaken readiness.
SBIR’s core failure: it rewards compliance over capability
SBIR is supposed to do two things well: bring new companies into defense and transition R&D into operational use. The reality is closer to an “application Olympics” where the most practiced proposal writers win.
How SBIR is meant to work (and where it breaks)
SBIR’s staged model is sensible on paper:
- Phase I: small checks (often $40K–$150K) to test feasibility and reduce technical risk.
- Phase II: larger awards to prototype, validate performance, and move toward adoption.
The breakdown happens in two places:
- Front-door access: truly new entrants struggle to compete against firms engineered to win solicitations.
- Back-end transition: many firms hit the “valley of death,” finishing Phase II with no buyer, no program sponsor, and no acquisition path.
A system that can’t reliably bring in new performers and can’t reliably transition prototypes is not an innovation engine. It’s a grant-making machine.
“Small business” standards that aren’t small
A big driver is eligibility. Under current rules, firms can qualify as “small” based largely on employee count (commonly under 500), even when annual revenue looks nothing like what most people would call small.
The result is predictable: large, well-networked, proposal-savvy companies stay eligible and hoover up awards that were designed to create space for startups.
If a program’s rules allow mature firms to compete in the same lane as fragile startups, the mature firms will win—and then call it meritocracy.
The “new entrant” gap is the biggest red flag
If SBIR were functioning as an on-ramp, you’d expect meaningful Phase I volume going to companies with no prior federal contracting history.
Instead, analysis of DoD Phase I awards over the last 20 years shows that less than 4% of Phase I funding went to companies with no prior federal experience.
That statistic should stop any serious defense innovation leader in their tracks.
Why this is especially dangerous for AI in defense
AI development cycles don’t match traditional defense cycles. Many of the best AI capabilities—data tooling, model monitoring, edge optimization, synthetic data, cyber analytics—are built by firms that:
- don’t start as federal contractors,
- don’t speak FAR fluently,
- move fast and pivot often,
- expect clear customer pull.
When the SBIR front door is effectively gated by paperwork endurance, defense loses access to the most dynamic portion of the AI supplier base. And the gap widens every year.
“SBIR mills” create a permanent valley of death
Here’s the uncomfortable truth: mills are not a side effect—mills are a rational outcome of the current incentive design.
If the easiest revenue is repeated Phase I/II wins, companies will optimize for repeated Phase I/II wins. The system teaches them to stay just under thresholds, spread across many topics, and keep the proposal machine running.
Transition performance isn’t being used like it should
One of the most damning operational issues is that program offices often aren’t required to track post-award outcomes in a way that changes future award decisions.
Publicly available data highlights how extreme this can look:
- One large recipient won $158M in DoD SBIR Phase I/II awards and only $17.3M in subsequent non-SBIR procurement.
- Another won $97M and only $1.2M after.
- Another won $115M and only $4.1M after.
Meanwhile, some firms with 12 or fewer SBIR awards generated more follow-on procurement than many high-volume mills.
The pattern is clear: for these high-volume recipients, every SBIR dollar returned at most about eleven cents of follow-on procurement. Repeat SBIR wins don’t reliably predict fielded capability.
What’s really being selected
A former Air Force leader described a dynamic many insiders recognize: evaluators see a proposal from a familiar mill, written in exactly the structure the bureaucracy expects, and they feel safe. Even when smaller firms submit technically stronger approaches, the mills often win because their proposals read “right.”
That’s not malice. It’s a predictable human response inside a compliance-heavy system.
The fix isn’t “try harder.” The fix is changing the operating model.
Where AI actually helps: turning SBIR into a data-driven portfolio
AI won’t magically make acquisition painless. But used correctly, AI can remove the two biggest distortions in the SBIR process:
- information asymmetry (who knows how to write for the system), and
- missing feedback loops (who actually transitions).
1) AI-assisted market intelligence for real small business discovery
Program offices can’t fund what they can’t find. The reality is that solicitations often fail to reach nontraditional performers.
An AI-enabled “supplier discovery” layer can:
- map commercial AI vendors and niche technical teams by capability,
- detect adjacent-use firms (e.g., health AI that fits medical readiness, logistics AI that fits contested sustainment),
- identify acquisition-readiness signals (SOC 2 posture, production deployments, security staffing),
- recommend outreach lists for Phase I topics.
This is how you get beyond the same Rolodex.
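To make that discovery layer concrete, here is a minimal sketch of how a topic owner might rank candidate vendors by capability fit and acquisition-readiness signals. The vendor schema, signal names, and weights are illustrative assumptions, not a real data source or a production scoring model.

```python
# Minimal sketch of a supplier-discovery scoring pass (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    capabilities: set[str]                          # e.g. {"model monitoring", "edge optimization"}
    signals: set[str] = field(default_factory=set)  # e.g. {"soc2", "production_deployments"}
    prior_federal_awards: int = 0

def outreach_score(vendor: Vendor, topic_keywords: set[str]) -> float:
    """Blend capability overlap with readiness signals, plus a small boost for true new entrants."""
    capability_fit = len(vendor.capabilities & topic_keywords) / max(len(topic_keywords), 1)
    readiness = min(len(vendor.signals), 3) / 3     # cap so signals cannot dominate
    new_entrant_boost = 0.1 if vendor.prior_federal_awards == 0 else 0.0
    return 0.6 * capability_fit + 0.3 * readiness + new_entrant_boost

def recommend_outreach(vendors: list[Vendor], topic_keywords: set[str], top_n: int = 10):
    scored = [(outreach_score(v, topic_keywords), v) for v in vendors]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]

# Illustrative candidates for a contested-logistics AI topic.
candidates = [
    Vendor("Acme Health AI", {"predictive maintenance", "model monitoring"}, {"soc2"}, 0),
    Vendor("Legacy Integrator", {"systems engineering"}, {"production_deployments"}, 42),
]
for score, vendor in recommend_outreach(candidates, {"predictive maintenance", "edge optimization"}):
    print(f"{vendor.name}: {score:.2f}")
```

The point isn’t the specific weights; it’s that outreach lists get generated from capability evidence rather than from whoever already knows the program office.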
2) AI to score evidence, not writing quality
Most organizations get this wrong: they treat the problem as reading proposals faster. It’s not.
The win is shifting evaluation toward verifiable evidence:
- prior prototypes or demos,
- measured model performance (latency, accuracy, drift rates),
- red-team results for AI security,
- integration artifacts (APIs, data schemas, edge constraints),
- realistic transition plans tied to a user and system.
AI can help extract, normalize, and compare these signals so reviewers aren’t over-weighting polished narrative.
A practical approach I’ve seen work: use structured scoring rubrics where 60–70% of the score is evidence-based, and narrative is capped. Then use automated checks to flag missing evidence.
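Here is a minimal sketch of what that rubric could look like, assuming hypothetical evidence categories and a 65/35 evidence-to-narrative split consistent with the 60–70% guidance above; the weights are illustrative, not an official scoring scheme.

```python
# Evidence-first rubric sketch: categories and weights are illustrative assumptions.
EVIDENCE_WEIGHTS = {
    "prototype_demo": 0.20,
    "measured_performance": 0.20,   # latency, accuracy, drift under load
    "security_redteam": 0.10,
    "integration_artifacts": 0.10,  # APIs, data schemas, edge constraints
    "transition_plan": 0.05,        # named user, target system, sponsor
}
NARRATIVE_WEIGHT = 0.35  # hard cap on how much polished writing can contribute

def score_proposal(evidence_scores: dict[str, float], narrative_score: float):
    """Inputs are reviewer ratings on a 0-1 scale.
    Returns (total score, missing-evidence flags for the review record)."""
    missing = [k for k in EVIDENCE_WEIGHTS if evidence_scores.get(k, 0.0) == 0.0]
    evidence_total = sum(w * evidence_scores.get(k, 0.0) for k, w in EVIDENCE_WEIGHTS.items())
    total = evidence_total + NARRATIVE_WEIGHT * min(narrative_score, 1.0)
    return round(total, 3), missing

# A technically strong but plainly written proposal still scores respectably,
# and the gaps (no red-team results, no integration artifacts) get flagged.
total, flags = score_proposal(
    {"prototype_demo": 0.9, "measured_performance": 0.8, "transition_plan": 0.7},
    narrative_score=0.5,
)
print(total, flags)
```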
3) Portfolio analytics that expose mills early
Defense R&D needs portfolio management discipline. AI can support that by monitoring outcomes over time:
- award concentration by vendor,
- topic-sprawl patterns (dozens of unrelated areas),
- transition rates by unit, mission area, and PM shop,
- time-to-follow-on procurement,
- “repeat Phase II” cycles without adoption.
This enables something SBIR rarely does today: actively manage the portfolio toward national security outcomes.
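As a sketch of what those monitors might compute, the snippet below derives vendor concentration, topic sprawl, and transition ratios from a hypothetical flat table of award records; the field names and sample figures are invented for illustration.

```python
# Portfolio-analytics sketch over hypothetical award records.
from collections import defaultdict

awards = [
    {"vendor": "Mill Co",     "topic_area": "sensors",   "sbir_dollars": 1.5e6, "follow_on_dollars": 0.0},
    {"vendor": "Mill Co",     "topic_area": "logistics", "sbir_dollars": 1.2e6, "follow_on_dollars": 0.0},
    {"vendor": "Focused Inc", "topic_area": "cyber",     "sbir_dollars": 0.9e6, "follow_on_dollars": 2.4e6},
]

def portfolio_report(awards: list[dict]) -> dict[str, dict]:
    sbir, follow_on, topics = defaultdict(float), defaultdict(float), defaultdict(set)
    for a in awards:
        sbir[a["vendor"]] += a["sbir_dollars"]
        follow_on[a["vendor"]] += a["follow_on_dollars"]
        topics[a["vendor"]].add(a["topic_area"])
    total = sum(sbir.values())
    return {
        v: {
            "share_of_portfolio": round(sbir[v] / total, 3),       # award concentration
            "topic_sprawl": len(topics[v]),                        # distinct topic areas
            "transition_ratio": round(follow_on[v] / sbir[v], 2),  # follow-on $ per SBIR $
        }
        for v in sbir
    }

for vendor, metrics in portfolio_report(awards).items():
    print(vendor, metrics)
```

Run quarterly, a report like this is enough to spot a mill pattern (rising share, sprawling topics, near-zero transition ratio) before it compounds.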
4) Transition prediction and matchmaking (the underused superpower)
The valley of death isn’t a metaphor—it’s a coordination failure.
AI can help match Phase II performers to real pull by:
- linking prototypes to programs of record and capability gaps,
- identifying which commands have adjacent funding lines,
- recommending integration pathways (ATO strategy, edge deployment constraints, data rights posture),
- surfacing “next contract vehicles” that fit the maturity level.
If you want AI capabilities to reach operational units, you need a system that treats transition as a managed process—not a hopeful event.
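One simple way to bootstrap that matchmaking is tag overlap between a prototype’s capabilities and cataloged gaps. The sketch below uses Jaccard similarity over hypothetical gap records; a production system would draw on richer program-of-record, funding-line, and integration data.

```python
# Matchmaking sketch: rank capability gaps by overlap with a Phase II prototype's tags.
# The gap catalog, tags, and funding-line fields are hypothetical placeholders.

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

capability_gaps = [
    {"program": "Contested Logistics PoR", "tags": {"route optimization", "edge inference"},
     "funding_line": "O&M FY26"},
    {"program": "ISR Processing PoR", "tags": {"sensor fusion", "model monitoring"},
     "funding_line": "RDT&E FY26"},
]

def match_prototype(prototype_tags: set[str], min_score: float = 0.25) -> list[dict]:
    """Rank gaps by tag overlap so transition conversations start with the likeliest sponsors."""
    scored = [{**gap, "match_score": round(jaccard(prototype_tags, gap["tags"]), 2)}
              for gap in capability_gaps]
    return sorted((g for g in scored if g["match_score"] >= min_score),
                  key=lambda g: g["match_score"], reverse=True)

# A Phase II performer with edge-inference and route-optimization work:
for hit in match_prototype({"edge inference", "route optimization", "digital twin"}):
    print(hit["program"], hit["funding_line"], hit["match_score"])
```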
The policy fixes are straightforward—AI makes them enforceable
Reform proposals discussed in the SBIR debate are not complicated. The hard part is enforcement and execution.
Here are three changes worth taking a stance on:
Enforce a real size standard
A revenue-based threshold (for example, under $40M annual revenue) aligns better with the public’s intuitive definition of “small” and protects the purpose of the program.
AI contributes by making eligibility verification less manual and less gameable (cross-checking corporate structures, affiliates, and revenue indicators).
Cap lifetime Phase I/II funding
A lifetime cap (for example, $75M) reduces dependency and forces successful firms to graduate into normal procurement.
AI contributes by providing real-time tracking across agencies and detecting affiliate workarounds.
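Both threshold checks reduce to the same mechanics: roll revenue and awards up to the corporate family, then compare against the limits. The sketch below uses the illustrative $40M and $75M figures proposed here and an invented affiliate mapping; real enforcement would rest on authoritative corporate and cross-agency award data.

```python
# Threshold-enforcement sketch across corporate families (illustrative figures and data).
from collections import defaultdict

REVENUE_CEILING = 40_000_000   # proposed revenue-based size standard
LIFETIME_CAP = 75_000_000      # proposed lifetime Phase I/II cap

affiliate_parent = {"Mill Co Labs": "Mill Co", "Mill Co West": "Mill Co"}  # hypothetical corporate tree

def family(vendor: str) -> str:
    return affiliate_parent.get(vendor, vendor)

def eligibility_flags(revenues: dict[str, float], awards: list[dict]) -> dict[str, list[str]]:
    """Aggregate revenue and cumulative SBIR dollars by corporate family, then flag breaches."""
    family_revenue, family_sbir = defaultdict(float), defaultdict(float)
    for vendor, rev in revenues.items():
        family_revenue[family(vendor)] += rev
    for a in awards:  # awards pooled across all participating agencies
        family_sbir[family(a["vendor"])] += a["dollars"]
    flags = defaultdict(list)
    for fam in set(family_revenue) | set(family_sbir):
        if family_revenue[fam] > REVENUE_CEILING:
            flags[fam].append("exceeds revenue-based size standard")
        if family_sbir[fam] > LIFETIME_CAP:
            flags[fam].append("exceeds lifetime Phase I/II cap; graduate to standard procurement")
    return dict(flags)

print(eligibility_flags(
    {"Mill Co Labs": 30e6, "Mill Co West": 25e6, "Garage Startup": 2e6},
    [{"vendor": "Mill Co Labs", "dollars": 80e6}],
))
```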
Reserve Phase I awards for true new entrants
If less than 4% is going to firms with no prior federal experience, the system is telling you it’s closed.
A set-aside for new entrants isn’t charity. It’s maintaining a pipeline.
AI contributes by helping new entrants through clearer solicitations, structured templates, and compliance automation—reducing the “bureaucratic literacy tax.”
What leaders can do in Q1 2026 (without waiting for perfection)
SBIR’s authorization timing and reform debate create a rare window. If you’re a defense innovation leader, PM, or CTO-equivalent, there are immediate actions that don’t require years of policy work.
- Require transition metrics for every topic owner (even if imperfect): follow-on dollars, time-to-contract, unit adoption signals.
- Run an “evidence-first” pilot on a subset of topics: cap narrative scoring, weight proof.
- Stand up a supplier discovery sprint for AI-relevant mission areas: contested logistics, ISR processing, electronic warfare support, cyber defense.
- Publish a plain-English SBIR playbook for first-timers and measure whether the new-entrant share rises.
- Use portfolio analytics to identify high-concentration winners and set internal controls before they become permanent fixtures.
If you can’t explain why a company won without referencing proposal format, you didn’t select for innovation—you selected for compliance.
The bigger point for AI in Defense & National Security
Defense leaders talk a lot about adopting AI for mission planning, intelligence analysis, autonomy, and cybersecurity. Those priorities are real. But there’s a quieter dependency underneath: your innovation pipeline has to be able to choose, fund, and transition the right work.
SBIR should be one of the most powerful tools for pulling nontraditional AI companies into the national security ecosystem. Right now, the data shows it too often functions as a subsidy for repeat participants—while genuinely small firms either never enter or stall after Phase II.
If you’re serious about fielding AI at speed, treat SBIR reform as a readiness issue—and treat AI-driven innovation management as the practical way to make reform stick.
If you want help thinking through what an AI-enabled SBIR portfolio looks like in your mission area—scoring rubrics, evidence standards, transition analytics, and governance—build a plan before the next cycle begins. The next solicitation window will arrive on schedule. The question is whether the same winners will, too.