FMCSA crash rates show midsize fleets facing higher risk. Learn how AI safety monitoring and route optimization reduce crashes and improve compliance.

FMCSA Crash Rates: How AI Can Cut Fleet Risk
A 278-driver fleet posting an 11.87% year-to-date crash-per-driver rate isn’t a “bad month.” It’s a flashing dashboard light that says your safety system isn’t keeping up with your operation.
That number comes from a recent FreightWaves analysis of FMCSA crash data (through Nov. 30, 2025) that compared carriers by crashes per driver. The uncomfortable pattern: midsize fleets (250–500 drivers) are showing materially higher crash rates than larger fleets—often 7%+, and in some cases north of 11%, while large fleets (500+ drivers) cluster closer to 5–6%.
This matters for anyone who buys freight capacity, manages a fleet, or sits in risk and compliance. A higher crash rate isn’t just an insurance line item. It’s downtime, missed service commitments, nuclear verdict exposure, and brand damage. And if you’re in the middle tier—big enough to have complexity, not big enough to have an enterprise safety department—this is the exact moment where AI in trucking safety stops being optional and starts being practical.
What the FMCSA crash data really tells you (and what it doesn’t)
Answer first: FMCSA crash data is a strong signal for frequency risk, but it’s incomplete and it doesn’t prove fault.
The FMCSA’s Motor Carrier Management Information System (MCMIS) aggregates crash reports uploaded by states based on police accident reports. These are typically crashes involving fatalities, injuries, or towaways. In the FreightWaves analysis, each carrier’s crash count was normalized by driver count to estimate a crash-rate metric: crashes YTD ÷ number of drivers.
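In code, the normalization is trivial; what matters is running it the same way every time. Here’s a minimal sketch with made-up carrier records (the 278-driver example is sized to reproduce the 11.87% figure above; the field names are illustrative, not the MCMIS schema):

```python
# Minimal sketch: normalize YTD crashes by driver count per carrier.
# Figures and field names are illustrative, not actual MCMIS data.
carriers = [
    {"name": "Carrier A", "drivers": 278, "crashes_ytd": 33},
    {"name": "Carrier B", "drivers": 620, "crashes_ytd": 35},
]

for c in carriers:
    # Crash rate = crashes YTD / number of drivers, expressed as a percentage
    rate = c["crashes_ytd"] / c["drivers"] * 100
    print(f"{c['name']}: {rate:.2f}% crashes per driver YTD")
```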
The signal: normalized crash frequency
When you normalize by driver count, you get something closer to “how often do my drivers end up in reportable crashes?” That’s useful because raw crash counts mostly track size.
In the FMCSA-based analysis:
- Large carriers (500+ drivers): top crash rates were around 5–6%.
- Midsize carriers (250–500 drivers): crash rates commonly exceeded 7%, with outliers above 11%.
Even if you adjust for noise, that gap is too large to shrug off. The pattern points to execution differences—training, coaching cadence, maintenance discipline, dispatch pressure, route selection, and safety tech coverage.
The blind spots: under-reporting, exposure, and preventability
The source analysis flags three issues that any safety leader should keep in mind:
- Under-reporting can be significant (often cited as 30–40% nationally, varying by state). If a crash never makes it into the state pipeline, it never hits MCMIS.
- Driver counts can be stale because they come from MCS-150 updates. If a carrier reports fewer drivers than they actually run, their “per-driver” crash rate can look worse than reality.
- Crash involvement ≠ fault. A carrier can be involved in a crash where they’re not culpable.
This is where the industry argument gets stuck. People say, “The data’s messy, so it’s useless.” I disagree.
Messy data is still actionable if you treat it like a risk indicator and pair it with better internal signals—telematics, ELD patterns, route context, inspection outcomes, and coaching history. That’s exactly what modern fleet intelligence systems are built to do.
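As a rough illustration of “risk indicator, not verdict,” here’s a hedged sketch that blends an external crash-rate percentile with an internal leading-indicator trend into a simple watch-list flag. The weights and thresholds are placeholders, not recommendations:

```python
def watch_list_flag(external_crash_pct, internal_event_trend, weight_external=0.4):
    """
    Combine a messy external signal (crash-rate percentile vs. peers, 0-100)
    with a cleaner internal one (week-over-week % change in safety events).
    Weights and thresholds are illustrative placeholders.
    """
    # Scale the internal trend so a 25% weekly rise saturates the score
    internal_score = min(max(internal_event_trend / 25.0, 0.0), 1.0) * 100
    composite = weight_external * external_crash_pct + (1 - weight_external) * internal_score
    return composite >= 60  # flag for a closer look, not a conclusion

# Example: 70th percentile external crash rate, events up 15% week-over-week
print(watch_list_flag(70, 15))  # True -> review, don't punish
```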
Why midsize fleets are getting hit harder
Answer first: Midsize fleets often have complexity without the staffing and tooling to manage it, and safety becomes inconsistent across terminals, managers, and driver cohorts.
Large fleets tend to have mature safety operations: dedicated trainers, structured coaching programs, tighter maintenance controls, and broader adoption of in-cab safety technology. Midsize fleets frequently have good people and good intentions—but the operating model is fragile.
Here’s what I see most often in 250–500 driver operations:
Safety programs don’t scale by default
At 80 drivers, an experienced safety manager can “touch” most problems directly. At 300 drivers, you need systems:
- consistent onboarding and refresher training
- standardized incident review
- routine coaching that’s not just “after a wreck”
- performance visibility by terminal, lane, and driver cohort
Without that, you end up with pockets of excellence and pockets of chaos.
Dispatch pressure becomes the hidden risk multiplier
Holiday peak season (we’re in it right now, mid-December) makes this worse. Freight spikes, customers get less tolerant, and planners take more chances:
- tighter appointment windows
- riskier routing to “save time”
- pushing drivers closer to their limits
When safety is treated as a department instead of an operating principle, peak season exposes it.
Data friction blocks good decisions
Midsize fleets often have data scattered across:
- telematics portals
- camera vendor dashboards
- TMS/dispatch
- maintenance systems
- HR and training records
If the team can’t answer basic questions fast—“Which lanes are producing rear-end events?” “Which terminal has the worst following-distance trend?”—then you’re managing safety from anecdotes.
Where AI actually reduces crashes (not just paperwork)
Answer first: AI reduces crash risk by predicting who and what is likely to go wrong before a crash—then making the intervention easy: coaching, routing changes, maintenance actions, or policy enforcement.
AI in transportation gets overhyped when it’s pitched as magic. The practical version is simpler: pattern detection + prioritization + workflow.
AI-driven safety monitoring: turning near-misses into leading indicators
Cameras and telematics already capture leading indicators like:
- close following
- hard braking and sudden lane changes
- speeding relative to posted limits and conditions
- distraction cues (where permitted)
- risky merges, intersection behavior, and tailgating
AI helps in two ways:
- It classifies events consistently (so safety managers aren’t stuck reviewing hours of video).
- It predicts elevated risk by combining trends over time (e.g., a driver’s event rate rising week-over-week).
A fleet that waits for crashes to coach is choosing the most expensive training method.
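To make the week-over-week idea concrete, here’s a minimal sketch that flags drivers whose event rate keeps climbing so coaching happens before the crash. The event counts and the 15% threshold are illustrative:

```python
# Hypothetical weekly safety-event rates per 1,000 miles, oldest to newest.
driver_weekly_rates = {
    "driver_17": [2.0, 2.4, 2.9, 3.5],   # climbing -> coach now
    "driver_42": [1.8, 1.6, 1.7, 1.5],   # stable or improving
}

def rising_risk(rates, min_weeks=3, weekly_increase=0.15):
    """Flag a driver whose event rate rose at least `weekly_increase`
    (15% here, an illustrative threshold) for `min_weeks` consecutive weeks."""
    streak = 0
    for prev, cur in zip(rates, rates[1:]):
        streak = streak + 1 if cur >= prev * (1 + weekly_increase) else 0
        if streak >= min_weeks:
            return True
    return False

coach_next = [d for d, r in driver_weekly_rates.items() if rising_risk(r)]
print(coach_next)  # ['driver_17']
```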
Predictive route risk: safer lanes beat “shortest miles”
FMCSA crash data (and state crash patterns) can be blended with operational and environmental context:
- recurring congestion corridors
- weather frequency by time-of-day and season
- work zones and high-merge interchanges
- customer dwell patterns that push nighttime driving
This is where AI route optimization gets interesting: not just cost and ETA, but risk-weighted routing.
A simple stance: If you can cut exposure to the highest-risk 10% of road segments during your highest-risk hours, you’ll see fewer serious events. Not zero. Fewer.
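Here’s a hedged sketch of what risk-weighted routing can mean in practice: score each candidate route on hours plus a weighted risk term instead of hours alone. The risk scores and the weight are placeholders your own crash, congestion, and weather data would have to supply:

```python
# Candidate routes for one load. Risk scores (0-1) would come from your own
# blend of crash history, congestion, work zones, and weather exposure.
routes = [
    {"name": "I-40 direct",      "hours": 7.2, "risk": 0.62},
    {"name": "Bypass via US-64", "hours": 7.9, "risk": 0.31},
]

RISK_WEIGHT_HOURS = 3.0  # illustrative: how many hours one full unit of risk "costs"

def risk_weighted_cost(route):
    return route["hours"] + RISK_WEIGHT_HOURS * route["risk"]

best = min(routes, key=risk_weighted_cost)
print(best["name"])  # "Bypass via US-64" under these illustrative numbers
```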
Maintenance risk scoring: prevent “small failures” that become crashes
Brake issues, tire conditions, steering, and lights don’t just create roadside violations—they create instability and longer stopping distances.
AI can flag risk when it sees combinations like:
- repeated ABS warnings + harsh braking clusters
- tire pressure patterns + temperature swings
- increasing engine fault codes + declining fuel economy
The win isn’t prediction for prediction’s sake. The win is repairing the right truck before it becomes the next towaway crash report.
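A minimal sketch of catching those combinations, assuming you can pull fault and behavior signals into one place. The signal names and thresholds are illustrative:

```python
# Illustrative per-truck signals pulled from telematics and maintenance systems.
trucks = [
    {"unit": "T-118", "abs_warnings_30d": 4, "harsh_brakes_30d": 22,
     "engine_faults_30d": 0, "fuel_econ_drop_pct": 1.0},
    {"unit": "T-204", "abs_warnings_30d": 0, "harsh_brakes_30d": 5,
     "engine_faults_30d": 4, "fuel_econ_drop_pct": 6.5},
]

def maintenance_flag(t):
    """Flag trucks where independent signals line up (thresholds are placeholders)."""
    brake_combo = t["abs_warnings_30d"] >= 3 and t["harsh_brakes_30d"] >= 15
    drivetrain_combo = t["engine_faults_30d"] >= 3 and t["fuel_econ_drop_pct"] >= 5.0
    return brake_combo or drivetrain_combo

inspect_first = [t["unit"] for t in trucks if maintenance_flag(t)]
print(inspect_first)  # ['T-118', 'T-204'] -> pull these before the next dispatch
```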
Better carrier vetting: AI helps shippers and brokers stop guessing
Answer first: AI improves carrier selection by combining FMCSA crash signals with current operating behavior, not just historical inspection-based scores.
The source analysis points out a core frustration: the CSA system relies heavily on inspection violations, while crash metrics (especially preventability-adjusted ones) aren’t fully public-facing.
From a buyer perspective, here’s the problem: if you’re only using standard, inspection-heavy scores, you can miss real-world risk.
An AI-assisted vetting approach typically looks like this:
- FMCSA crash frequency indicators (normalized by size)
- inspection and out-of-service trends
- lane/region risk matching (carrier performs well in Region A, poorly in Region B)
- claim and incident narratives (internal, where available)
- operational red flags (extreme appointment padding, excessive re-powering, unusual deadhead patterns)
The result isn’t “blacklist carriers.” It’s right-carrier, right-lane, right-time.
A practical rule: if a carrier’s crash frequency is elevated, don’t automatically walk away—start by limiting them to lanes where your model sees lower risk and higher operational slack.
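A hedged sketch of that rule in code: rather than a blanket blacklist, gate an elevated-crash-rate carrier to the lanes your model scores as lower risk. The carrier data, lane scores, and cutoffs are all illustrative:

```python
# Illustrative vetting inputs: normalized crash rate and per-lane modeled risk.
carrier = {
    "name": "Example Carrier",
    "crash_rate_pct": 8.4,  # crashes per driver YTD, from FMCSA-derived data
    "lane_risk": {"ATL-CLT": 0.25, "CHI-DEN": 0.70},  # 0 = low modeled risk
}

ELEVATED_CRASH_RATE = 6.0   # illustrative cutoff, not an FMCSA threshold
MAX_LANE_RISK = 0.40        # only tender lower-risk lanes to elevated carriers

def allowed_lanes(c):
    if c["crash_rate_pct"] < ELEVATED_CRASH_RATE:
        return list(c["lane_risk"])  # no restriction
    return [lane for lane, r in c["lane_risk"].items() if r <= MAX_LANE_RISK]

print(allowed_lanes(carrier))  # ['ATL-CLT'] -> right carrier, right lane
```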
A 30-day AI safety playbook for midsize fleets (250–500 drivers)
Answer first: Start with two leading indicators, one operational change, and one workflow you’ll actually run weekly.
If you’re trying to reduce crashes in Q1 2026, don’t begin with a six-month platform rollout. Begin with a tight loop.
Week 1: Clean the exposure numbers
- Update your MCS-150 driver counts so your crash-rate normalization isn’t distorted.
- Audit your crash records and challenge obvious errors through FMCSA’s DataQs process.
- Build one internal metric: crashes per million miles (if you have mileage) alongside crashes per driver (a minimal sketch follows this list).
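Here’s what that pairing can look like, with made-up fleet totals; the point is tracking both so a stale driver count doesn’t mislead you:

```python
# Illustrative fleet totals for the year to date.
drivers = 312            # from an up-to-date MCS-150 count
miles_ytd = 28_400_000   # from ELD/telematics, if available
crashes_ytd = 19

per_driver_pct = crashes_ytd / drivers * 100
per_million_miles = crashes_ytd / (miles_ytd / 1_000_000)

print(f"Crashes per driver YTD: {per_driver_pct:.2f}%")
print(f"Crashes per million miles: {per_million_miles:.2f}")
```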
Week 2: Pick two leading indicators you’ll manage
Good pairs for most fleets:
- following distance events
- speeding relative to posted limits
Set thresholds and review cadence. The goal is consistency, not perfection.
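One way to make thresholds and cadence concrete, assuming your telematics can export weekly events per 1,000 miles. Every number here is a placeholder to be tuned against your own baseline:

```python
# Placeholder weekly thresholds for the two chosen leading indicators,
# expressed as events per 1,000 miles. Tune to your fleet's baseline.
THRESHOLDS = {"following_distance": 3.0, "speeding": 5.0}

weekly_events = {  # illustrative export: driver -> events per 1,000 miles
    "driver_08": {"following_distance": 4.2, "speeding": 2.1},
    "driver_31": {"following_distance": 1.0, "speeding": 1.4},
}

# Weekly cadence: anyone over either threshold goes on next week's coaching list.
coaching_list = [
    d for d, ev in weekly_events.items()
    if any(ev[k] > THRESHOLDS[k] for k in THRESHOLDS)
]
print(coaching_list)  # ['driver_08']
```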
Week 3: Launch coaching that drivers will accept
Drivers tune out gotcha programs. Keep it specific:
- one 10-minute coaching session per flagged driver
- one behavior target per week
- show the clip, agree on the fix, document it
Week 4: Add risk-weighted routing to planning
Start small:
- pick one region or top five lanes
- add a “risk note” to dispatch (construction corridor, recurring congestion, weather window); see the sketch after this list
- adjust start times or route choice when it doesn’t break service
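A small sketch of the risk-note idea: annotate planned loads with known lane hazards so planners see them at dispatch time. The lanes and notes are made up:

```python
# Illustrative lane risk notes maintained by the safety/planning team.
LANE_RISK_NOTES = {
    ("MEM", "DAL"): "Work zone on I-40 near Little Rock; recurring PM congestion",
    ("ATL", "JAX"): "Heavy merge risk at I-75/I-285; prefer pre-06:00 start",
}

def annotate_load(load):
    """Attach a risk note to a planned load when one exists for its lane."""
    load["risk_note"] = LANE_RISK_NOTES.get((load["origin"], load["dest"]), "")
    return load

load = annotate_load({"origin": "MEM", "dest": "DAL", "start": "14:30"})
print(load["risk_note"])
```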
Run the loop weekly and track whether leading indicators drop. If they do, crash frequency tends to follow.
Where FMCSA transparency should go next (and how AI fills the gap today)
Answer first: Public safety scoring should emphasize outcome-based metrics like preventability-adjusted crash rates, but fleets and brokers don’t need to wait for policy changes to act.
The source analysis argues for better integration of objective crash outcomes into public-facing safety signals—especially preventability-reviewed crashes. I’m on board. If the industry wants fewer serious incidents, we should measure what we’re trying to prevent.
Until that happens, the best operators are building their own “crash prevention stack”:
- near-miss detection (cameras + telematics)
- predictive coaching prioritization (AI risk scoring)
- lane risk management (risk-weighted routing)
- maintenance prediction (fault + behavior correlations)
If you’re running a midsize fleet, this is the moment to stop treating safety analytics as a report and start treating it as an operating system.
You don’t need more data. You need faster, better decisions from the data you already have.
What’s the one lane, terminal, or driver cohort you’d change first if you could see your crash risk 30 days ahead—clearly and in one dashboard?