Trump’s AI Order: What Insurers Must Do Next

AI in Government & Public Sector · By 3L3C

Trump’s AI executive order targets state rules. Here’s what it means for AI in insurance in 2026—and how to stay compliant while scaling AI.

AI regulation · AI governance · insurance compliance · claims automation · underwriting analytics · fraud detection · public sector policy

A single executive order can change how fast your AI roadmap moves—or how fast it gets litigated into limbo.

On December 18, 2025, President Donald Trump signed an executive order aimed at curtailing state-level AI regulation by directing federal agencies to identify “burdensome” state AI rules and discourage new ones through funding pressure and legal challenges. If you’re building or buying AI for underwriting, claims automation, or fraud detection, the biggest impact isn’t political theater. It’s operational uncertainty.

For insurers, this is a pivotal moment: AI adoption is accelerating, but governance expectations are splintering across states, regulators, and courts. In this post—part of our AI in Government & Public Sector series—I’ll translate what the order signals for 2026, what’s likely to happen next, and how insurance leaders can protect growth without sleepwalking into compliance or reputational risk.

What the executive order actually changes (and what it doesn’t)

Answer first: The order doesn’t instantly erase state AI laws. It attempts to discourage state regulation and to set the stage for a federal framework that could later preempt parts of those state laws.

The source article notes that states like Colorado, California, Utah, and Texas have already passed cross-sector AI laws (privacy limits, transparency expectations, and risk-focused provisions). Those laws exist because AI systems already influence high-stakes decisions—employment, lending, and healthcare—and research has repeatedly shown bias and error risks.

The order directs federal agencies to do two practical things:

  1. Identify and label state AI rules as “burdensome.”
  2. Pressure states not to enact or enforce those rules by:
    • threatening to withhold certain federal funds (the article mentions broadband as an example), and/or
    • challenging state laws in court.

It also signals an intent to pursue a lighter-touch national framework. That sounds attractive to businesses operating nationally (including insurers), but “lighter-touch” can also mean “less specific,” which rarely helps a regulated industry make go/no-go deployment decisions.

The important nuance for insurance teams

Even if state AI rules are chilled, insurance is still regulated heavily at the state level. You don’t get to stop caring about state expectations just because AI regulation is in dispute. Departments of insurance can still scrutinize:

  • unfair discrimination
  • rating and underwriting justification
  • adverse action explanations
  • third-party model governance
  • consumer complaint patterns

So the order may reduce one category of state obligations while leaving the core insurance compliance reality intact.

Why state AI regulation matters to insurance in the first place

Answer first: Because AI systems in insurance often make or influence decisions that regulators consider “material”—pricing, eligibility, claim outcomes, and fraud flags.

A lot of AI governance talk gets stuck on abstract ideas like “ethics.” Insurers don’t live in abstracts. They live in workflows:

  • An underwriting model recommends a surcharge.
  • A claims tool auto-routes a claim to fast-track or SIU.
  • A fraud model increases scrutiny on certain transactions.
  • A conversational AI tool gives coverage guidance and may be treated like a digital producer.

When states regulate AI transparency and discrimination risk, they’re effectively regulating how you prove the model is fair, explainable, and controllable.

Where the risk shows up (real-world examples)

Here are four places I’ve seen insurers underestimate AI risk—especially when regulatory direction is unclear:

  1. Proxy discrimination in underwriting

    • Even if your model excludes protected classes, it can infer them from correlated variables (location, shopping signals, device data, credit attributes). States targeting algorithmic discrimination will ask how you tested for disparate impact (a minimal testing sketch follows this list).
  2. Claims triage that becomes claims denial-by-default

    • Automation that “just routes work” can still create systematic delays or heightened scrutiny for certain groups. Regulators and plaintiff attorneys look for patterns.
  3. Fraud detection that over-flags honest customers

    • False positives are not just a CX problem—they become a complaint and market conduct problem.
  4. Vendor opacity

    • If a third party won’t disclose training data sources, performance by segment, or change-management controls, you inherit the governance gap.

State AI laws are often trying to force a minimum bar around these realities: data minimization, transparency, and accountability.
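
To make the proxy-discrimination point concrete, here is a minimal sketch of a disparate impact check: compare favorable-outcome rates across segments and flag any segment whose rate falls well below the most-favored group. The segment labels, sample data, and the 0.8 benchmark (borrowed from the EEOC “four-fifths” rule of thumb) are illustrative assumptions, not an insurance-specific legal standard.

```python
from collections import defaultdict

# `decisions` is a list of (segment_label, got_favorable_outcome) pairs,
# e.g. ("segment_a", True). Segments, data, and the 0.8 benchmark are
# illustrative assumptions for this sketch.

def disparate_impact_report(decisions, benchmark=0.8):
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for segment, ok in decisions:
        totals[segment] += 1
        favorable[segment] += int(ok)

    rates = {s: favorable[s] / totals[s] for s in totals}
    best = max(rates.values()) or 1.0  # guard against divide-by-zero
    return {
        s: {
            "favorable_rate": round(r, 3),
            "ratio_to_best": round(r / best, 3),
            "below_benchmark": (r / best) < benchmark,
        }
        for s, r in rates.items()
    }

# Example: flag any segment whose fast-track rate lags far behind the best.
report = disparate_impact_report([
    ("segment_a", True), ("segment_a", True), ("segment_a", False),
    ("segment_b", True), ("segment_b", False), ("segment_b", False),
])
print(report)
```

A ratio test like this is a starting point, not a defense on its own; the documentation of how segments were chosen and what you did about gaps matters just as much.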

The 2026 reality: “regulatory clarity” may arrive through courts, not Congress

Answer first: Expect litigation and uneven enforcement, which means insurers should plan for multiple compliance scenarios rather than betting on a single national rule.

The article flags what’s coming: legal fights. Colorado’s attorney general warned the state may sue if the order is enforced; California lawmakers are already signaling court challenges; other states say they’ll continue pursuing broader regulation.

This matters because executive orders have limits. A court battle that drags through 2026 creates a familiar corporate risk pattern:

  • Product teams keep building.
  • Compliance teams can’t get definitive answers.
  • Procurement signs vendor deals with weak governance provisions.
  • Then an enforcement action or lawsuit forces a rushed retrofit.

A practical way to plan: three scenarios

If you’re an insurance leader setting 2026 priorities, scenario planning beats political guessing.

  1. Scenario A: The order is largely blocked

    • State AI rules continue to expand.
    • Compliance becomes a “most stringent state” approach (similar to privacy playbooks).
  2. Scenario B: The order chills new state AI laws, but doesn’t erase existing ones

    • You still must comply in states with enacted rules.
    • The bigger change is fewer new cross-sector AI laws (though insurance-specific scrutiny remains).
  3. Scenario C: A federal framework emerges that preempts key parts of state AI regulation

    • Good: simpler compliance map.
    • Bad: if the framework is vague, insurers may face uncertainty around what “good enough” governance looks like—until the first exams and lawsuits define it.

Most carriers should assume B in the short term and build capabilities that also work under A.

What insurers should do right now (even if the rules change again)

Answer first: Build AI governance that’s defensible under scrutiny: documentation, testing, monitoring, and human accountability.

Whether you’re building or buying AI in the insurance space, here’s the truth: tools alone won’t de-risk AI adoption. Process is what de-risks it.

1) Create an “AI system inventory” that a regulator would respect

Not a spreadsheet graveyard. A living inventory that answers:

  • What is the model’s purpose and decision impact?
  • What data sources does it use (including third-party and derived features)?
  • Who owns it (business + technical + compliance)?
  • Where is it deployed (states, lines, channels)?
  • What’s the fallback if it fails?

If you can’t list your AI systems, you can’t govern them.
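
As a sketch of what a “living inventory” can mean in practice, here is one illustrative entry captured as a typed record. The field names mirror the questions above; they are assumptions for this example, not a regulatory schema.

```python
from dataclasses import dataclass

# One inventory entry per deployed AI system. Field names and values
# are illustrative assumptions.

@dataclass
class AISystemRecord:
    name: str
    purpose: str                  # what it decides or influences
    decision_impact: str          # e.g. "pricing", "claim routing", "fraud flag"
    data_sources: list[str]       # including third-party and derived features
    business_owner: str
    technical_owner: str
    compliance_owner: str
    deployed_states: list[str]
    lines_of_business: list[str]
    fallback: str                 # what happens if the model fails or is pulled

inventory = [
    AISystemRecord(
        name="claims-triage-v3",
        purpose="Route incoming auto claims to fast-track or manual review",
        decision_impact="claim routing",
        data_sources=["FNOL form", "policy system", "vendor severity score"],
        business_owner="Claims Ops",
        technical_owner="ML Platform",
        compliance_owner="Market Conduct",
        deployed_states=["CO", "TX", "UT"],
        lines_of_business=["personal auto"],
        fallback="route all claims to manual review",
    ),
]
```

The point is that every question above maps to a required field, so a blank answer is a visible gap instead of a silent one.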

2) Standardize model documentation (the boring part that saves you)

A strong documentation packet typically includes:

  • training data summary and refresh cadence
  • feature rationale (why each signal is there)
  • explainability approach appropriate to use case
  • performance metrics (accuracy and error costs)
  • bias/disparate impact testing approach
  • change log and approval trail

This is also where you protect yourself when a vendor says, “Trust us.” Don’t.
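
One way to make the packet enforceable rather than aspirational is a completeness gate in your model promotion process. A minimal sketch, assuming section names that mirror the list above (they are illustrative, not a regulator-mandated format):

```python
# Block promotion of any model whose documentation packet is missing a
# required section. Section names are assumptions mirroring the list above.

REQUIRED_SECTIONS = {
    "training_data_summary",
    "feature_rationale",
    "explainability_approach",
    "performance_metrics",
    "bias_testing",
    "change_log",
}

def missing_sections(packet: dict) -> set[str]:
    """Return required sections that are absent or empty."""
    return {s for s in REQUIRED_SECTIONS if not packet.get(s)}

packet = {
    "training_data_summary": "2022-2024 closed claims, refreshed quarterly",
    "feature_rationale": "see features.md",
    "performance_metrics": {"auc": 0.81, "false_positive_cost": "manual review"},
}

gaps = missing_sections(packet)
if gaps:
    raise SystemExit(f"Documentation incomplete, do not promote: {sorted(gaps)}")
```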

3) Treat “human in the loop” as a control, not a slogan

If an adjuster can override an AI recommendation, regulators will ask:

  • Are overrides tracked?
  • Are supervisors reviewing override patterns?
  • Are employees trained on when to distrust the model?
  • Does the model’s recommendation create pressure to conform?

A human button isn’t governance. A monitored override workflow is.
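
A minimal sketch of what a monitored override workflow can look like: log every override with a reason, then report override rates per adjuster so supervisors can review the patterns. The event fields and sample log are illustrative assumptions.

```python
from collections import defaultdict

# Each event: (adjuster_id, model_recommendation, final_decision, reason).
# Fields and sample values are illustrative assumptions.
override_log = [
    ("adj-101", "fast_track", "manual_review", "injury severity unclear"),
    ("adj-101", "deny", "pay", "documentation supports loss"),
    ("adj-207", "fast_track", "fast_track", ""),
]

def override_rates(log):
    total = defaultdict(int)
    overridden = defaultdict(int)
    for adjuster, recommended, final, _reason in log:
        total[adjuster] += 1
        overridden[adjuster] += int(recommended != final)
    return {a: overridden[a] / total[a] for a in total}

# Both extremes are signals: near-zero overrides can mean rubber-stamping
# the model; very high rates can mean the model or the training is off.
print(override_rates(override_log))
```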

4) Prepare for transparency requests before they arrive

Whether the request comes from a state AI rule, a DOI market conduct exam, or a plaintiff’s discovery request, you should be able to produce:

  • why a decision was made (at the right level of detail)
  • what factors were used and excluded
  • how consumers can appeal or correct data

A practical stance I like: “Explain to the customer, document for the regulator.”
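
A minimal sketch of that stance in practice: capture the explanation at decision time so it can be produced later without reconstruction. All field names and values here are illustrative assumptions, not an adverse-action template.

```python
from datetime import datetime, timezone

# Build an explanation record the moment a decision is made.
# Field names and values are illustrative assumptions.

def build_decision_record(decision_id, outcome, top_factors,
                          excluded_factors, model_version, appeal_instructions):
    return {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        # Consumer-facing reasons, in plain language.
        "top_factors": top_factors,
        # Documented exclusions for the regulator file.
        "excluded_factors": excluded_factors,
        "model_version": model_version,
        "appeal_instructions": appeal_instructions,
    }

record = build_decision_record(
    decision_id="UW-2026-000123",
    outcome="surcharge applied",
    top_factors=["prior at-fault loss in last 36 months", "vehicle repair cost"],
    excluded_factors=["race", "religion", "national origin"],
    model_version="uw-risk-v7.2",
    appeal_instructions="Contact underwriting review; correct data via the policyholder portal.",
)
```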

5) Update vendor contracts to match the new risk environment

If this executive order triggers a “race to ship,” vendors may push faster deployments with thinner governance. Push back contractually:

  • audit rights (or at least detailed attestations)
  • incident notification timelines
  • model change disclosures
  • data provenance and permitted use
  • indemnity aligned to AI harms (where feasible)

Insurers have leverage here. Use it.

The government angle: AI policy is now an insurance operating variable

Answer first: Federal-state tension on AI regulation will influence insurance operations the same way privacy policy did—by shaping what’s feasible, provable, and scalable.

This post sits in the AI in Government & Public Sector series for a reason. When public-sector policy shifts, private-sector AI programs don’t just “comply.” They redesign:

  • Product design: what data you’re willing to use
  • Distribution: how you explain AI-assisted decisions to customers and agents
  • Claims operations: what you automate vs. what stays human-reviewed
  • Risk appetite: how aggressively you deploy models that affect vulnerable consumers

If you’re building AI strategy for 2026, you should be tracking policy developments like you track catastrophe models: not because you control them, but because they shape your loss profile.

A useful one-liner for leadership teams: “AI regulation uncertainty is still AI risk—just harder to model.”

What to watch next (so you’re not surprised in Q1 and Q2)

Answer first: Watch for court injunctions, federal agency guidance, and state insurance regulator responses—not just AI headlines.

Over the next 90–180 days, the signals that matter most to insurers are:

  • Early litigation outcomes (temporary restraining orders, injunctions, standing decisions)
  • Federal agency enforcement posture (which agencies act, and how aggressively)
  • State DOI bulletins and exam focus areas (AI and unfair discrimination scrutiny can increase even without new “AI laws”)
  • State legislative behavior (some states will push forward out of defiance; others will pause)

If you run multistate programs, assume a mixed map. That’s not pessimism—it’s just the U.S. regulatory model.

Next steps: Build AI programs that survive both speed and scrutiny

The executive order is being framed as an innovation play. For insurance, the smarter framing is resilience: build AI that keeps working even if the rules swing again.

If you’re deploying AI in underwriting, claims automation, or fraud detection and want a clear plan that won’t collapse under a market conduct exam, start with two artifacts: a defensible AI inventory and a repeatable governance package for every model. Those two alone prevent most “we didn’t realize” failures.

Which part of your AI program is most exposed right now: data sources, vendor opacity, or decision explainability—and what would it take to fix it before 2026 ramps up?