AI Regulation Shift: What Insurers Should Do Next

AI in Government & Public Sector • By 3L3C

Trump’s AI executive order could curb state rules. Here’s how insurers can stay compliant, scale AI, and reduce underwriting and claims risk.

ai-regulation · insurance-ai · underwriting · claims-automation · ai-governance · public-sector-policy


A lot of insurance leaders have been waiting for “regulatory clarity” before scaling AI beyond pilots. That wait just got riskier.

On December 18, 2025, President Trump signed an executive order aimed at curtailing state-level AI regulation. The order directs federal agencies to identify “burdensome” state AI rules, apply pressure through federal funding, and advance a national framework that could override state laws. Whether it holds up in court or not, it adds a new kind of uncertainty: not just what the rules are, but who gets to write them.

This matters for the AI in Government & Public Sector series because the public sector isn’t just “a regulator” in the abstract. It’s a buyer of AI, a user of AI, and the referee of AI harms in the real economy. And insurance sits right in the blast radius: underwriting, claims, pricing, fraud, marketing, and customer service are already being shaped by emerging AI governance.

What the executive order changes (and what it doesn’t)

The executive order’s core move is simple: it attempts to reduce state-by-state AI rules by discouraging states from passing or enforcing certain AI regulations, while pushing toward a lighter federal framework.

The practical effect: less patchwork—or a different patchwork

If the order succeeds, insurers could see fewer conflicting AI obligations across states. That sounds appealing if you run national operations. But there’s a catch: policy doesn’t become “clear” just because it becomes “federal.”

Two things can be true at once:

  • A 50-state compliance matrix is expensive.
  • A fast-moving federal approach can still be unpredictable—especially if states sue, courts intervene, and agencies issue shifting guidance.

Insurers should plan for a multi-year transition period, not a clean switch.

What it doesn’t touch (and why insurers should care anyway)

According to reporting on the order, it doesn’t aim to preempt every AI-related statute. Some state laws—such as certain child safety protections, deepfake restrictions, and rules around state government AI procurement—may remain in force.

For insurance, that nuance matters because AI rules rarely stay neatly in one box:

  • A “consumer privacy” rule can affect what data supports fraud detection.
  • A “transparency” rule can affect what you must disclose about automated claims triage.
  • A “state procurement” rule can set norms that spill into the private market (vendors tend to standardize).

Where state AI rules were headed—and why insurance was always a target

State-level AI governance has been accelerating because AI systems are already involved in high-stakes decisions: jobs, credit, housing, and healthcare. Insurance belongs on that list, even when lawmakers don’t name it explicitly.

Here’s the direct, insurance-specific translation:

  • Underwriting is a consequential decision.
  • Claims settlement is a consequential decision.
  • Pricing and eligibility are consequential decisions.

When states like Colorado, California, Utah, and Texas pass cross-sector AI laws (often paired with privacy and transparency requirements), insurance workflows are part of the real-world enforcement surface.

The real regulatory risk isn’t “AI.” It’s unexplained outcomes.

A sentence worth remembering: Regulators don’t fear models; they fear outcomes they can’t audit.

Many state proposals (and many consumer advocates) focus on:

  • transparency requirements (what’s automated, what data is used)
  • assessments for discriminatory impact
  • constraints on sensitive data collection and inference

Insurance already has a long history with fairness standards and disparate impact debates. AI doesn’t create that scrutiny; it concentrates it.

What this means for AI in underwriting and claims in 2026

The most important operational point is this: a federal push to limit state AI regulation doesn’t reduce your AI risk exposure—it shifts it.

If state guardrails weaken, the next pressure points for insurers tend to become:

  • market conduct exams and unfair trade practice allegations
  • class actions tied to discrimination or deceptive practices
  • vendor liability disputes (“the model did it” won’t work)
  • reputational events tied to denial patterns or opaque pricing

Underwriting: speed is great—until it’s unreviewable

AI-assisted underwriting is attractive because it can:

  • reduce cycle time
  • standardize appetite decisions
  • use more signals than a human can hold in working memory

But the risk is equally straightforward: if you can’t explain why a risk was declined, tiered, or priced a certain way, you will eventually lose an argument you didn’t see coming.

What I’ve found works inside carriers is to treat every underwriting model like it will be questioned by three audiences:

  1. a regulator
  2. a plaintiff’s attorney
  3. your own board during a reputational crisis

If it can survive those, it’s probably ready.

Claims: automation is under a microscope in customer-facing moments

Claims automation creates measurable efficiency, especially in:

  • FNOL intake classification
  • document extraction
  • severity triage
  • subrogation identification
  • fraud scoring

But claims is also where customers experience “algorithmic unfairness” most viscerally.

A practical stance: automate the invisible back office aggressively; be more cautious where the customer feels the consequence.

That doesn’t mean “no AI” in claims decisions. It means:

  • clear escalation paths to humans
  • documented reason codes
  • audit trails that stand up months later
  • controls to prevent automation from becoming de facto denial
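
As a rough sketch of what that last control can look like in practice, here’s a simple routing rule in Python. The function name, labels, and confidence threshold are placeholders I’m assuming for illustration, not a recommended standard:

```python
def route_claim_recommendation(recommendation: str,
                               confidence: float,
                               min_confidence: float = 0.85) -> str:
    """Illustrative escalation rule for claims automation.

    Anything that leans toward denial, or that the model isn't confident
    about, goes to a human reviewer so automation can't become a de facto
    denial engine. The threshold is a placeholder, not a recommendation.
    """
    if recommendation == "deny" or confidence < min_confidence:
        return "escalate_to_human"
    return "auto_process"


# A confident "approve" can flow straight through; a "deny" never does.
print(route_claim_recommendation("approve", 0.95))  # auto_process
print(route_claim_recommendation("deny", 0.99))     # escalate_to_human
```

The point isn’t the exact threshold; it’s that denial-leaning recommendations never complete without a human in the loop, and that the routing rule itself is documented and auditable.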

A compliance roadmap that works under either regime

Whether the executive order is enforced, narrowed by courts, or reversed later, insurers still need a governance posture that travels well across jurisdictions.

Here’s a roadmap that tends to hold up regardless of who’s in charge.

1) Build an AI inventory that’s actually usable

Answer first: if you don’t know where AI is used, you can’t manage regulatory risk.

A real inventory includes:

  • use case (e.g., “bodily injury severity triage”)
  • model type (rules, gradient boosting, LLM, vendor black box)
  • data categories used (including inferred attributes)
  • decision impact (advisory vs automated action)
  • owner, vendor, and monitoring cadence

If you’re doing this in spreadsheets, it’s fine—until it isn’t. The goal is traceability, not perfection.
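
To make those fields concrete, here’s a minimal sketch of what one inventory entry could look like as a data structure. The class name, field names, and the `DecisionImpact` categories are my assumptions for illustration, not an industry schema:

```python
from dataclasses import dataclass
from enum import Enum


class DecisionImpact(Enum):
    ADVISORY = "advisory"    # a human makes the final call
    AUTOMATED = "automated"  # the system acts without review


@dataclass
class AIInventoryEntry:
    """One row in the AI inventory. Field names are illustrative."""
    use_case: str                 # e.g., "bodily injury severity triage"
    model_type: str               # rules, gradient boosting, LLM, vendor black box
    data_categories: list[str]    # including inferred attributes
    decision_impact: DecisionImpact
    owner: str
    vendor: str | None = None
    monitoring_cadence: str = "quarterly"


# Example entry
triage_entry = AIInventoryEntry(
    use_case="bodily injury severity triage",
    model_type="gradient boosting",
    data_categories=["claim notes", "medical codes", "inferred injury severity"],
    decision_impact=DecisionImpact.ADVISORY,
    owner="claims-analytics",
    vendor=None,
    monitoring_cadence="monthly",
)
```

Whether this lives in a spreadsheet, a database, or code matters less than having every field populated and owned.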

2) Separate “assistive AI” from “decision AI”

Answer first: regulators and litigators care far more when AI directly changes eligibility, price, or payment.

Create two governance lanes:

  • Assistive AI: summarization, drafting, routing, data extraction
  • Decision AI: underwriting eligibility, pricing tiers, claims payment recommendations, fraud flags that trigger adverse action

Decision AI should face stricter testing, documentation, and approvals.
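
Here’s a minimal sketch of how that lane assignment can be encoded, assuming a simple rule that anything touching eligibility, price, payment, or adverse action lands in the decision lane (names and logic are illustrative):

```python
from enum import Enum


class GovernanceLane(Enum):
    ASSISTIVE = "assistive"  # summarization, drafting, routing, extraction
    DECISION = "decision"    # stricter testing, documentation, approvals


def classify_lane(changes_eligibility: bool,
                  changes_price: bool,
                  changes_payment: bool,
                  triggers_adverse_action: bool) -> GovernanceLane:
    """Route a use case to a governance lane.

    Illustrative rule: anything that directly changes eligibility, price,
    or payment, or that can trigger adverse action, goes in the decision lane.
    """
    if any([changes_eligibility, changes_price,
            changes_payment, triggers_adverse_action]):
        return GovernanceLane.DECISION
    return GovernanceLane.ASSISTIVE


# A fraud flag that can trigger adverse action lands in the decision lane
print(classify_lane(False, False, False, True))  # GovernanceLane.DECISION
```

The value of encoding the rule is that the classification becomes reviewable and consistent, rather than a case-by-case argument.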

3) Make fairness testing a standard release gate

Answer first: if you don’t test for discrimination, someone else will—after the damage is done.

At minimum, insurers should implement:

  • pre-deployment bias testing on protected classes (where legally permissible)
  • proxy and correlation checks (ZIP code is the classic trap)
  • adverse impact monitoring over time (models drift)
  • challenger models or benchmarks for reasonableness

Even if federal policy softens, plaintiffs and state AGs don’t disappear.
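
As one hedged example of what a release gate could look like, here’s a sketch of an adverse impact ratio check in Python. The 0.8 threshold mirrors the classic four-fifths rule from employment contexts and is used here only as an illustration; your legal and actuarial teams should set the actual standard and remediation process:

```python
import pandas as pd


def adverse_impact_ratio(outcomes: pd.DataFrame,
                         group_col: str,
                         favorable_col: str,
                         reference_group: str) -> dict[str, float]:
    """Compare each group's favorable-outcome rate to a reference group.

    `outcomes` has one row per decision, a group column (e.g., a protected
    class, where testing is legally permissible) and a boolean favorable flag.
    """
    rates = outcomes.groupby(group_col)[favorable_col].mean()
    return (rates / rates[reference_group]).to_dict()


def release_gate(ratios: dict[str, float], threshold: float = 0.8) -> bool:
    """Illustrative gate: fail the release if any group falls below the threshold."""
    return all(r >= threshold for r in ratios.values())
```

Run the same check on a recurring schedule after deployment and you also cover the drift-monitoring bullet above.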

4) Require “reason codes” and audit trails

Answer first: you need to reconstruct a decision months later, using the record you had at the time.

For underwriting/claims models, require:

  • reason codes aligned to business terms (not “feature_12 weight: 0.37”)
  • versioning of models and data
  • retention of input features used in the decision
  • clear human override logging

This is also good operations hygiene. It helps you debug.
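
Here’s a minimal sketch of what such an audit record might capture, assuming a simple JSON-serializable structure. Field names and the storage reference are hypothetical:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class DecisionAuditRecord:
    """Illustrative audit record; field names are assumptions, not a standard."""
    decision_id: str
    model_name: str
    model_version: str      # versioning of models and data
    data_snapshot_ref: str  # pointer to the retained input features
    reason_codes: list[str] # business terms, not raw feature weights
    recommendation: str     # what the model suggested
    final_action: str       # what actually happened
    human_override: bool    # was the recommendation overridden?
    decided_at: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self))


record = DecisionAuditRecord(
    decision_id="CLM-2026-000123",
    model_name="severity-triage",
    model_version="3.4.1",
    data_snapshot_ref="s3://audit/clm-2026-000123/inputs.parquet",
    reason_codes=["injury severity: high", "prior claims: none"],
    recommendation="fast-track settlement review",
    final_action="routed to senior adjuster",
    human_override=True,
    decided_at=datetime.now(timezone.utc).isoformat(),
)
print(record.to_json())
```
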

5) Fix vendor governance (because “black box” is a business choice)

Answer first: vendor AI doesn’t outsource accountability.

If a vendor can’t provide:

  • model documentation appropriate to impact
  • testing evidence (performance, bias, drift)
  • incident response timelines
  • rights to audit and obtain artifacts

…you’re not buying a product. You’re buying future disputes.

“People also ask” questions insurers are asking right now

Will the executive order eliminate state AI laws immediately?

Not in a clean, instant way. The order signals federal pressure and potential preemption, but court challenges and state resistance can stretch this into a long, messy period. Plan for overlap.

Does lighter regulation mean insurers can move faster with AI?

You can move faster operationally, but your downside risk can grow if you skip controls. Reputational harm, litigation, and market conduct scrutiny still punish opaque or biased outcomes.

What should an insurance AI program prioritize in 2026?

Prioritize high-ROI, low-regret controls:

  1. AI inventory
  2. decision vs assistive classification
  3. fairness testing and drift monitoring
  4. reason codes and audit trails
  5. vendor contract requirements

Those five show discipline to regulators and improve business performance.

The stance I’d take: don’t wait for Washington to settle this

Insurers that postpone AI strategy until the courts decide who can regulate AI are choosing the worst of both worlds: they still carry the operational drag of manual processes, and they still inherit AI risk through vendors, shadow tools, and inconsistent adoption.

A smarter approach is to scale AI where it’s defensible: clear governance, measurable outcomes, documented decisions, and tight vendor controls. If federal rules eventually override some state requirements, you’ll already be operating at a level that’s hard to criticize. If states keep their power, you won’t be scrambling.

If you’re planning your 2026 roadmap right now, here’s the question that will determine whether AI becomes a durable advantage or a recurring crisis: Can you explain your AI-driven decisions in plain language, with evidence, to a skeptical audience?