Gen AI in Insurance: Configure, Don’t Build or Buy

AI in Insurance · By 3L3C

Gen AI in insurance is shifting from build vs. buy to configure. Learn how LLM-first workflows improve underwriting, claims, and service while keeping control.

Gen AI strategy · Insurtech · LLMOps · Claims automation · Underwriting · Customer service AI



Most insurers still frame Gen AI adoption as a “build vs. buy” decision. That framing is already costing time.

By late 2025, the real advantage isn’t who trained a model from scratch or who bought the flashiest point solution. It’s who can configure reliable, domain-specific AI workflows across underwriting, claims, and service—without waiting years for a platform rebuild.

I’ve seen teams stall because they treat AI like a one-time procurement event (“pick a vendor”) or a moonshot engineering project (“we’ll build our own model”). The better path is more practical: LLM-first solutions that are configurable, so business teams can iterate quickly, IT can govern safely, and outcomes show up in cycle time, loss leakage, and customer experience.

Why “build vs. buy” breaks down for Gen AI

Answer first: Traditional build-or-buy assumes software is a static product. Gen AI is a capability layer that needs ongoing tuning, guardrails, and process integration.

In insurance, AI touches regulated decisions, customer communications, and financial outcomes. That means you can’t just ship a model and walk away. You need:

  • Process fit (your underwriting appetite, your claims rules, your scripts)
  • Operational controls (audit trails, approvals, escalation paths)
  • Data boundaries (PII, PHI, payment data, jurisdictional retention)
  • Continuous improvement (prompt updates, retrieval updates, policy updates)

The old choice—build everything internally or buy a rigid tool—fails because Gen AI changes quickly and because insurance workflows are messy. The winner is the organization that can change AI behavior safely in weeks, not quarters.

The hidden cost of “build”: time, talent, and maintenance debt

Building internally can be the right move for a narrow set of core differentiators (for example, proprietary pricing signals or unique fraud network analytics). But most carriers underestimate what “build” really means for Gen AI:

  • You’re not just building an app—you’re building LLMOps: evaluation, red teaming, monitoring, and versioning.
  • You’re responsible for model behavior over time: drift, updates, new failure modes.
  • You’re responsible for compliance evidence: why a response was generated, what data was used, and what safeguards were applied.

A lot of internal programs hit a wall after the proof-of-concept. Not because the model doesn’t work—but because the organization can’t operationalize it with the governance required in insurance.

The hidden cost of “buy”: rigidity and poor integration

Buying off-the-shelf point solutions can deliver speed. The problem is what comes next.

If the solution isn’t configurable to your product language, your distribution model, and your appetite for risk, you get one of two outcomes:

  1. Shadow operations (humans rewriting AI outputs, duplicating checks)
  2. Stalled adoption (teams stop using it because it doesn’t match reality)

In practice, “buy” only works long-term when the vendor gives you deep configuration, strong governance controls, and clean integration into your claims and policy administration systems.

The third path: configurable LLM-first insurance workflows

Answer first: The best Gen AI strategy for most insurers is to adopt LLM-first platforms that are configurable, so you can adapt workflows without rebuilding the core technology.

This is where the industry is heading: not custom foundation models trained by each carrier, and not locked-in tools that can’t evolve. Instead, insurers are choosing partners and platforms that let them configure:

  • Prompts and playbooks aligned to policy language and regulations
  • Retrieval over approved internal knowledge (guidelines, endorsements, SOPs)
  • Role-based controls (agent vs. adjuster vs. supervisor)
  • Audit logs and evaluation metrics
  • Integrations into CRM, telephony, document management, and core systems

A practical way to think about it:

Build the guardrails and workflows. Configure the intelligence.
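
What does “configure the intelligence” mean in practice? Here is a minimal Python sketch of a workflow treated as configuration. Everything in it is hypothetical: the WorkflowConfig class, its fields, and the document names are placeholders rather than any vendor’s API.

    # Hypothetical sketch only: class, fields, and documents are placeholders,
    # not a real platform's API. Business-owned settings, not code, define how
    # the model behaves.
    from dataclasses import dataclass, field

    @dataclass
    class WorkflowConfig:
        name: str
        prompt_template: str            # playbook language the business team owns
        knowledge_sources: list[str]    # approved documents retrieval may cite
        allowed_roles: list[str]        # who can run or approve the workflow
        escalation_rules: dict          # when a human must take over
        audit_fields: list[str] = field(
            default_factory=lambda: ["source_docs", "model_version", "reviewer"]
        )

    fnol_triage = WorkflowConfig(
        name="fnol_triage",
        prompt_template="Summarize the loss notice against the claims intake checklist...",
        knowledge_sources=["claims_sop_v12.pdf", "coverage_guidelines_2025.pdf"],
        allowed_roles=["adjuster", "supervisor"],
        escalation_rules={"injury_mentioned": "route_to_supervisor"},
    )

The point isn’t the code itself; it’s that the checklist, the approved sources, and the escalation rules live in settings a claims or underwriting leader can change without a release cycle.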

Why configuration beats code for insurance teams

Configurable LLM-first solutions shift the work from “write more code” to “design better decisions.” That’s a good trade for insurance because your real value is in operational expertise:

  • How you triage a claim
  • How you explain coverage
  • How you detect fraud signals early
  • How you keep underwriting consistent across channels

When the platform supports low-code/no-code configuration, you can involve the people who know the work best: claims leaders, underwriting managers, compliance, and QA.

This matters because Gen AI outcomes are rarely limited by model intelligence. They’re limited by process design.

Multimodal AI is raising expectations across claims and service

Modern models can work across text, voice, and images, and that changes what “automation” looks like in insurance.

  • In claims, adjusters want help extracting facts from photos, statements, and repair estimates.
  • In service, contact centers want real-time guidance during calls and accurate after-call summaries.
  • In underwriting, teams want faster intake from submissions, loss runs, and email threads.

Multimodal capability reduces the number of handoffs. It also changes staffing needs: the critical skill becomes operational AI ownership (LLMOps + process governance), not just data science.

Where configurable Gen AI drives results in insurance

Answer first: The highest-ROI Gen AI use cases are the ones that reduce cycle time and rework in underwriting, claims, and customer engagement—while keeping strong governance.

Below are three areas where configurable, LLM-first solutions consistently perform.

Underwriting: faster intake, better consistency

Underwriting teams lose time on document chasing, summarization, and inconsistent application of guidelines. Configurable Gen AI helps by:

  • Creating submission summaries that follow your exact underwriting checklist
  • Extracting key fields from emails and attachments into structured formats
  • Generating risk narratives that are consistent across underwriters
  • Suggesting next-best actions (missing docs, referral triggers, appetite flags)

What to configure so it actually works (a short sketch follows this list):

  • Your underwriting rules as retrieval-backed guidance, not vague prompts
  • Clear “when to escalate” logic (referrals, exceptions, approvals)
  • Standard output templates for different products and states
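
As a hedged illustration of the “when to escalate” point, here’s a small Python sketch that encodes referral triggers as data an underwriting manager can read and change. The thresholds, classes, and field names are invented for the example, not drawn from any carrier’s guidelines.

    # Illustrative only: thresholds, classes, and field names are invented.
    REFERRAL_TRIGGERS = {
        "tiv_over": 25_000_000,                      # total insured value above authority
        "prohibited_classes": {"fireworks retail"},  # outside appetite
        "loss_ratio_over": 0.65,                     # adverse loss history
    }

    def referral_reasons(submission: dict) -> list[str]:
        """Return the referral reasons triggered by an already-extracted submission."""
        reasons = []
        if submission.get("tiv", 0) > REFERRAL_TRIGGERS["tiv_over"]:
            reasons.append("TIV exceeds underwriting authority")
        if submission.get("class_of_business") in REFERRAL_TRIGGERS["prohibited_classes"]:
            reasons.append("Class of business outside appetite")
        if submission.get("loss_ratio", 0) > REFERRAL_TRIGGERS["loss_ratio_over"]:
            reasons.append("Adverse loss history")
        return reasons

    # The LLM extracts fields from the submission; the rules stay deterministic and auditable.
    print(referral_reasons({"tiv": 40_000_000, "class_of_business": "warehousing", "loss_ratio": 0.4}))

Keeping the escalation logic deterministic, with the model doing extraction and drafting, is what makes the output defensible in an audit.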

If you’re measuring success, start with:

  • Quote turnaround time
  • Referral rate and referral resolution time
  • Rework rate (how often submissions bounce back)

Claims: cycle time reduction without sacrificing control

Claims is where AI can create immediate operational lift—but only if it’s designed for real adjuster workflows.

Configurable Gen AI can support:

  • Automated claim file summaries (what happened, what’s missing, next steps)
  • Document and note drafting aligned to your compliance and tone
  • Triage support: categorizing complexity, routing to the right queue
  • Fraud signal triage (flag patterns for SIU review, not final decisions)

A stance I’ll defend: avoid full “hands-off” claims decisions unless you have exceptional controls. Most carriers will get better outcomes by using Gen AI to reduce admin work and improve consistency, while keeping humans accountable for final determinations.

What to configure (with a short sketch after the list):

  • Approved language for coverage explanations and denial communications
  • Escalation thresholds (injury indicators, coverage conflicts, high severity)
  • Evidence requirements (photos, police reports, medical bills)
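
To show what “evidence requirements” can look like as configuration, here’s a minimal Python sketch. The claim types and document names are illustrative placeholders; swap in your own playbook.

    # Illustrative placeholders only; adapt claim types and documents to your playbook.
    REQUIRED_EVIDENCE = {
        "auto_collision": ["police_report", "photos", "repair_estimate"],
        "water_damage": ["photos", "plumber_invoice", "mitigation_report"],
    }

    def missing_evidence(claim_type: str, documents_on_file: set[str]) -> list[str]:
        """List required documents not yet on file, so the AI reports gaps instead of guessing."""
        return [doc for doc in REQUIRED_EVIDENCE.get(claim_type, []) if doc not in documents_on_file]

    print(missing_evidence("auto_collision", {"photos"}))
    # -> ['police_report', 'repair_estimate']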

Metrics that show value fast:

  • Average days to close
  • Touches per claim
  • Supplement rate and leakage indicators

Customer engagement: better answers, fewer transfers

Policyholders don’t care that your systems are fragmented. They just want a clear answer.

Gen AI improves customer engagement when it’s integrated into:

  • Digital self-service (policy questions, billing, endorsements)
  • Agent and contact center assist (real-time suggested responses)
  • Back-office workflows (case resolution, follow-ups)

The difference between “helpful” and “risky” comes down to configuration (a short sketch follows this list):

  • Use retrieval from approved knowledge bases
  • Add policy-specific constraints (“only answer from these sources”)
  • Provide “I don’t know” and escalation behaviors
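
Here’s a hedged Python sketch of that “only answer from these sources” behavior. The retriever and generate_answer arguments are placeholders for whatever retrieval and generation stack you’ve configured, and the relevance threshold is arbitrary.

    # Sketch only: retriever and generate_answer are placeholders for your configured stack.
    ESCALATION_MESSAGE = (
        "I can't confirm that from our policy documentation. "
        "Let me connect you with a licensed representative."
    )

    def answer_policy_question(question, retriever, generate_answer, min_relevance=0.75):
        """Answer from approved sources, or escalate when those sources don't cover the question."""
        passages = retriever(question)  # searches only the approved knowledge base
        supported = [p for p in passages if p["score"] >= min_relevance]
        if not supported:
            return {"answer": ESCALATION_MESSAGE, "escalate": True, "sources": []}
        draft = generate_answer(question, context=[p["text"] for p in supported])
        return {"answer": draft, "escalate": False, "sources": [p["doc_id"] for p in supported]}

The structural point is the fallback: a configured “I don’t know” path with escalation beats a confident answer the sources don’t support.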

In 2025, customers are also more AI-aware. If your AI is vague, overly confident, or inconsistent, trust drops quickly.

A practical decision framework for insurance leaders

Answer first: Choose build, buy, or configure based on where differentiation lives—and where speed and governance matter more.

Here’s the framework I use with insurance teams.

Step 1: Classify the use case

  • Differentiating + proprietary data advantage → consider building (selectively)
  • Standard workflow + clear ROI → configure a specialized LLM-first solution
  • Commodity capability (basic chat, generic summarization) → buy, but keep it governed

Step 2: Demand “insurance-grade” controls

If a platform can’t show these, it’s not ready for production insurance operations (a small regression-test sketch follows the list):

  • Role-based access and environment separation
  • Audit logs and traceability (what source supported the answer)
  • Evaluation harness (test sets, regression tests for prompts)
  • PII handling and retention controls
  • Human-in-the-loop review options
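
For the evaluation harness in particular, the simplest useful form is a regression test over a fixed test set. The sketch below is hypothetical: run_workflow stands in for your configured pipeline, and the cases are invented.

    # Hypothetical sketch of a prompt regression test; cases and run_workflow are placeholders.
    TEST_CASES = [
        {"input": "Does my homeowners policy cover flood damage?",
         "must_contain": ["flood"],
         "must_not_contain": ["guaranteed", "definitely covered"]},
        {"input": "Is water backup covered without the endorsement?",
         "must_contain": ["endorsement"],
         "must_not_contain": ["yes, it is covered"]},
    ]

    def run_regression(run_workflow) -> list[dict]:
        """Run every test case and report failures; an empty list means safe to promote."""
        failures = []
        for case in TEST_CASES:
            output = run_workflow(case["input"]).lower()
            missing = [p for p in case["must_contain"] if p not in output]
            forbidden = [p for p in case["must_not_contain"] if p in output]
            if missing or forbidden:
                failures.append({"case": case["input"], "missing": missing, "forbidden": forbidden})
        return failures

Every prompt change, retrieval update, or model upgrade should pass the same test set before it reaches production; that is what “regression tests for prompts” means in practice.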

Step 3: Plan for the first 90 days

Most Gen AI programs fail because they start with a broad vision and no operational plan.

A realistic 90-day path looks like this (a baseline-metrics sketch follows the list):

  1. Pick one workflow (e.g., FNOL triage, underwriting intake, call summarization)
  2. Define baseline metrics (cycle time, touches, QA scores)
  3. Configure prompts + retrieval + guardrails
  4. Run a controlled pilot with QA review
  5. Expand to the next adjacent workflow
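
The baseline step is where most pilots get sloppy, so here’s a hedged Python sketch of a baseline snapshot. The field names are illustrative; pull the real values from your claims or policy administration system before the pilot starts.

    # Illustrative field names; the point is to freeze the baseline before the pilot.
    # Assumes a non-empty sample with opened_date/closed_date as date objects.
    from statistics import mean

    def baseline_snapshot(claims: list[dict]) -> dict:
        closed = [c for c in claims if c.get("closed_date")]
        return {
            "avg_days_to_close": mean((c["closed_date"] - c["opened_date"]).days for c in closed),
            "avg_touches_per_claim": mean(c["touch_count"] for c in claims),
            "avg_qa_score": mean(c["qa_score"] for c in claims if "qa_score" in c),
        }

Re-run the same snapshot at day 90 and compare.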

If you can’t explain how you’ll measure outcomes in 90 days, you’re not buying a solution—you’re buying a science project.

What this means for the “AI in Insurance” roadmap

Answer first: In the AI in Insurance series, the pattern is clear—real adoption comes from workflow ownership and governance, not model obsession.

Fraud detection, risk pricing, claims automation, and customer engagement all benefit from Gen AI, but they benefit most when you treat AI as a configurable layer tied to operational accountability.

By 2026, the competitive gap will look less like “who has AI” and more like:

  • Who can update underwriting guidance in days
  • Who can reduce claims handling time without increasing leakage
  • Who can standardize customer communications across channels

The next step is straightforward: identify one underwriting, claims, or service workflow where your team is drowning in manual effort—and configure an LLM-first approach with tight controls.

If you’re evaluating options right now, ask a blunt question internally: Are we trying to win by writing more code, or by shipping better insurance workflows faster?