Firefly Digital’s 2024 Americas award signals what’s working in AI for insurance: practical automation for underwriting, claims, and service.

AI in Insurance: What Firefly Digital’s Award Signals
The fastest way to spot where AI in insurance is actually working isn’t a press release or a vendor demo. It’s when practitioners vote with their attention—and their standards.
That’s why Firefly Digital being crowned the 2024 Americas InsurTech Innovation Award winner at The Digital Insurer’s regional finals matters. An award doesn’t prove product-market fit on its own. But it does act as a clear market signal: the industry is rallying around practical innovation—especially where automation reduces cycle time, improves risk decisions, and makes the customer experience less painful.
For this week’s entry in our AI in Insurance series, I’m using the Firefly Digital win as a springboard to answer the real question insurance leaders are asking in late 2025: What kind of AI and digital innovation is getting recognized—and what should carriers copy in their own underwriting, claims, and service roadmaps?
Why an InsurTech award is a leading indicator (not a trophy)
Awards can be noise. This one is useful because of how it was decided.
Firefly Digital won at a live regional final (November 7, 2024) after a competitive round with six finalists, each presenting and taking Q&A. The winner was selected by a vote of the TDI community and the event audience. That format favors solutions that are easy to explain, defensible under questioning, and clearly valuable to day-to-day operations.
Here’s what that signals about the market for AI-driven insurance transformation:
- Operational impact beats “cool AI.” Buyers are tired of prototypes that never reach production.
- Time-to-value is a feature. If implementation requires a 12‑month data clean-up project first, the initiative loses momentum before it delivers anything.
- Transparency is becoming a differentiator. In regulated environments, explainability and audit trails are no longer optional.
A sentence I keep coming back to when evaluating AI vendors is this:
If the value can’t be explained in one workflow—quote, bind, endorse, claim, renew—it won’t survive procurement.
What “innovation” means in AI insurance systems (and what it doesn’t)
“Innovation” in insurance gets misused. In 2025, the bar has moved.
Innovation now means measurable improvements to underwriting, claims automation, fraud detection, and customer engagement—without adding new compliance or operational risk.
The three outcomes judges and buyers reward
Even without dissecting any single vendor’s product, most award-winning InsurTech solutions cluster around three outcomes:
- Faster decisioning (reduced quote-to-bind time, faster claims triage)
- Better decisions (improved risk pricing, fewer underwriting errors, smarter fraud flags)
- Lower cost-to-serve (automation in policy servicing, document handling, customer support)
If you’re leading an AI program inside a carrier, this is the helpful reframing:
- Don’t sell “AI.” Sell a shorter cycle time.
- Don’t promise perfect predictions. Promise fewer avoidable touches.
- Don’t aim for full automation first. Aim for consistent, explainable assistance.
What doesn’t count anymore
Teams still waste quarters on these “innovation traps”:
- A chatbot that answers FAQs but can’t authenticate, transact, or hand off cleanly
- A model that improves AUC on paper but can’t be monitored post-launch
- Automation that shifts work to humans via exceptions (the hidden cost)
Most companies get this wrong: they optimize the model and ignore the workflow.
Where AI is paying off most in underwriting, claims, and engagement
The practical value of Firefly Digital’s recognition is that it reinforces where the industry is placing bets. In the Americas, especially, carriers are prioritizing AI programs that handle high-volume operations with regulatory constraints.
AI in underwriting: triage, appetite, and decision support
Underwriting is still a human craft in many lines—but AI underwriting is steadily taking over the early steps:
- Submission intake automation: reading emails, ACORD forms, loss runs, and attachments
- Risk triage: routing “straightforward” submissions to fast lanes
- Appetite matching: identifying which products and programs fit without manual back-and-forth
- Decision support: surfacing similar risks, referral reasons, and pricing guidance
A high-performing underwriting AI initiative usually has one clear metric: reduce touches per submission. That metric ties directly to expense ratio and broker experience.
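To make “reduce touches per submission” concrete, here is a minimal sketch of a fast-lane triage rule. The field names, thresholds, and routing labels are hypothetical placeholders, not Firefly Digital’s product or any specific carrier’s guidelines:

```python
from dataclasses import dataclass

# Hypothetical submission record; field names are illustrative only.
@dataclass
class Submission:
    line_of_business: str
    total_insured_value: float
    loss_count_5yr: int
    in_appetite: bool

def triage(sub: Submission) -> str:
    """Route a submission to a fast lane or to full underwriter review.

    A fast-lane submission should need one underwriter touch or none;
    everything else follows the normal workflow. Thresholds are placeholders.
    """
    if not sub.in_appetite:
        return "decline_or_refer"      # out of appetite: no underwriting time spent
    if sub.total_insured_value <= 2_000_000 and sub.loss_count_5yr <= 1:
        return "fast_lane"             # straight-through or single-touch quote
    return "full_review"               # complex risk: full underwriter workflow

if __name__ == "__main__":
    sample = Submission("commercial_property", 1_500_000, 0, True)
    print(triage(sample))  # -> fast_lane
```

Touches per submission then becomes a measured average across those routes, which is what ties the initiative back to expense ratio.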
Claims automation: triage first, settlement second
Most carriers shouldn’t start by trying to fully automate claims settlement. Start with claims triage and document intelligence.
High-ROI patterns I’ve seen repeatedly:
- First notice of loss (FNOL) classification with structured and unstructured inputs
- Coverage guidance that points adjusters to relevant forms and exclusions
- Document extraction (estimates, invoices, medical bills) with validation rules
- Next-best-action prompts for adjusters based on claim type and severity
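To make the FNOL classification pattern above concrete, here is a minimal sketch that combines structured fields with a crude scan of the loss description. In practice the keyword check would be a trained classifier; every field name, category, and threshold here is a placeholder:

```python
# Minimal FNOL triage sketch: structured fields plus a simple text check.
# All names, severity rules, and queue labels are illustrative assumptions.

def triage_fnol(claim: dict) -> dict:
    description = claim.get("loss_description", "").lower()

    # Hypothetical severity heuristic.
    if claim.get("injury_reported") or "total loss" in description:
        severity = "high"
    elif claim.get("estimated_amount", 0) > 25_000:
        severity = "medium"
    else:
        severity = "low"

    # Hypothetical routing: low severity goes to fast-track handling.
    queue = "fast_track" if severity == "low" else "adjuster_review"
    return {"severity": severity, "queue": queue}

if __name__ == "__main__":
    fnol = {
        "loss_description": "Water damage to kitchen, no injuries",
        "injury_reported": False,
        "estimated_amount": 8_000,
    }
    print(triage_fnol(fnol))  # -> {'severity': 'low', 'queue': 'fast_track'}
```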
This matters in December specifically because claims volume often spikes seasonally (weather events, travel, year-end activity), and service backlogs get exposed fast. AI that reduces intake time and routes claims correctly is the difference between “we’re managing” and “we’re overwhelmed.”
Fraud detection: fewer false positives, better referrals
Fraud models fail when they swamp SIU teams with noise. The best AI fraud detection programs:
- Combine multiple signals (claim history, device signals, network links, document anomalies)
- Produce ranked referrals with reasons, not just a score
- Track outcomes so the model learns from investigations
If your SIU team can’t tell you the hit rate of AI-generated referrals, your fraud program is opinion-driven, not data-driven.
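Here is a minimal sketch of a ranked referral that carries reasons, not just a score. The signal names, weights, and threshold are hypothetical and would be calibrated against SIU investigation outcomes in a real program:

```python
# Fraud referral sketch: score plus the reasons behind it.
# Signal names and weights are placeholders, not a recommended model.

SIGNAL_WEIGHTS = {
    "prior_claims_12mo": 0.30,
    "document_anomaly": 0.35,
    "network_link_to_known_fraud": 0.25,
    "device_mismatch": 0.10,
}

def score_referral(signals: dict) -> dict:
    """Return a referral payload with a score and ranked reasons."""
    fired = {name: w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name)}
    score = sum(fired.values())
    return {
        "score": round(score, 2),
        "reasons": sorted(fired, key=fired.get, reverse=True),  # strongest first
        "refer_to_siu": score >= 0.5,  # placeholder threshold, tuned on SIU hit rate
    }

if __name__ == "__main__":
    print(score_referral({"document_anomaly": True, "prior_claims_12mo": True}))
    # -> {'score': 0.65, 'reasons': ['document_anomaly', 'prior_claims_12mo'], 'refer_to_siu': True}
```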
Customer engagement: AI that actually reduces churn
In insurance, “engagement” means one thing: customers get answers and actions quickly.
The most valuable AI customer engagement setups:
- Agent and adjuster copilots that draft messages, summarize histories, and suggest next steps
- Self-service that can transact (policy changes, payment plans, claim status updates)
- Proactive outreach based on life events or policy gaps—done carefully, with consent and clear value
When engagement AI works, you see it in:
- Lower call handle time
- Higher first-contact resolution
- Fewer “where’s my claim?” contacts
What to learn from award-winning InsurTechs: a practical evaluation checklist
If you’re comparing vendors—or building internally—use this checklist to stay grounded.
1) Workflow fit: where does it live?
Start by forcing clarity:
- Does this AI sit in email intake, the core system, the CRM, or as a standalone portal?
- What is the one workflow it improves (quote, bind, claim, renew, service)?
- Who is the primary user: underwriter, adjuster, agent, customer, SIU?
If the answer is “everyone,” it’s usually no one.
2) Data readiness: what does it require on day one?
A strong solution should work with what carriers already have:
- PDFs, scanned documents, and inconsistent forms
- Legacy policy and claims systems
- Partial data fields and missing values
Ask for a pilot plan that spells out:
- Minimum viable data
- Expected exception handling
- Human review steps
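As a rough illustration, that pilot plan for a document-intake use case can be captured as simply as this; every value below is a placeholder to negotiate, not a benchmark:

```python
# Hypothetical pilot plan for a document-intake use case; values are placeholders.
PILOT_PLAN = {
    "minimum_viable_data": {
        "sample_size": 500,                # historical submissions or claims
        "formats": ["pdf", "tiff", "email_body"],
        "required_fields": ["policy_number", "loss_date"],  # everything else may be missing
    },
    "exception_handling": {
        "low_confidence_threshold": 0.80,  # below this, route to a human
        "max_exception_rate": 0.25,        # pilot fails if more than 25% need manual rework
    },
    "human_review": {
        "reviewer_role": "underwriting_assistant",
        "sampling_rate": 1.0,              # review 100% during the pilot, then step down
    },
}
```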
3) Governance: can you defend the output?
In regulated industries, a model you can’t explain becomes a liability.
Non-negotiables:
- Audit logs (who/what/when changed outcomes)
- Model monitoring (drift, performance, bias checks)
- Human-in-the-loop controls (thresholds, overrides, escalation)
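Here is a minimal sketch of how the human-in-the-loop control and the audit log fit together, assuming a hypothetical confidence threshold and log schema; the point is that every model-influenced outcome records who or what decided, when, and why:

```python
import json
from datetime import datetime, timezone
from typing import Optional

AUTO_APPROVE_THRESHOLD = 0.90   # placeholder confidence cut-off

def decide(model_score: float, override_by: Optional[str] = None) -> dict:
    """Apply a threshold-based guardrail and write an audit entry."""
    if override_by:
        outcome, actor = "human_override", override_by
    elif model_score >= AUTO_APPROVE_THRESHOLD:
        outcome, actor = "auto_approved", "model_v1"
    else:
        outcome, actor = "escalated_to_human", "model_v1"

    audit_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                 # who/what made the call
        "model_score": model_score,
        "outcome": outcome,
    }
    print(json.dumps(audit_entry))      # stand-in for an append-only audit log
    return audit_entry

if __name__ == "__main__":
    decide(0.95)                                  # auto-approved, logged
    decide(0.72)                                  # escalated, logged
    decide(0.72, override_by="adjuster_jsmith")   # human override, logged
```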
4) Economics: where are the real savings?
Push for a basic unit economics story:
- Cost per submission handled
- Cost per claim processed
- Time saved per adjuster/underwriter per day
A pilot that can’t tie back to those numbers isn’t a pilot—it’s a science project.
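Here is a back-of-the-envelope version of that unit economics story for a claims-intake pilot. Every number is a placeholder to show the structure of the calculation, not a benchmark:

```python
# Unit economics sketch for a claims-intake pilot; all inputs are placeholders.
claims_per_month = 4_000
baseline_minutes_per_claim = 18
piloted_minutes_per_claim = 11          # measured during the pilot, not promised by the vendor
loaded_cost_per_hour = 55.0             # fully loaded adjuster cost

minutes_saved = (baseline_minutes_per_claim - piloted_minutes_per_claim) * claims_per_month
monthly_savings = (minutes_saved / 60) * loaded_cost_per_hour
cost_per_claim_before = (baseline_minutes_per_claim / 60) * loaded_cost_per_hour
cost_per_claim_after = (piloted_minutes_per_claim / 60) * loaded_cost_per_hour

print(f"Cost per claim: ${cost_per_claim_before:.2f} -> ${cost_per_claim_after:.2f}")
print(f"Estimated monthly savings: ${monthly_savings:,.0f}")
```

If the pilot can produce those four inputs from its own logs, the business case writes itself; if it can’t, that is your answer.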
A realistic path to adopting AI in insurance (without blowing up operations)
Carriers that succeed with AI don’t attempt a big bang transformation. They sequence it.
Here’s a practical rollout approach that fits most mid-to-large insurers.
Phase 1: Assist and automate the “messy middle”
Target unstructured work first:
- Document intake
- Summarization
- Classification
- Drafting correspondence
This phase creates fast wins and builds trust.
Phase 2: Standardize decisions with guardrails
Next, tighten decisioning:
- Underwriting referrals
- Coverage checks
- Triage rules + model signals
Keep humans accountable for decisions, but make the process consistent.
Phase 3: Expand to proactive engagement and prevention
Only after operations are stable should you push into:
- Proactive retention models
- Risk prevention programs (IoT, telematics, property intelligence)
- Personalization at scale
This is where carriers start differentiating—because the foundation is already in place.
People also ask: what does an InsurTech innovation award mean for carriers?
It means buyers are prioritizing AI that produces measurable operational outcomes. Awards voted on by practitioner communities tend to favor solutions that reduce cycle times, improve decision quality, and integrate cleanly into existing workflows.
Does it mean every carrier should buy InsurTech tools instead of building? No. Many carriers should do both: buy for commodity capabilities (document intelligence, intake, copilots) and build where you have proprietary advantage (pricing, risk selection, distribution strategy).
What’s the biggest risk when adopting AI in insurance? Poor governance. A model that isn’t monitored, auditable, and controllable will eventually create compliance exposure—especially when it influences underwriting or claims outcomes.
Where this leaves AI in insurance heading into 2026
Firefly Digital’s award win is a small story with a big implication: AI in insurance is no longer about proving the technology works—it’s about proving the workflow works. Practitioners are rewarding solutions that survive real Q&A, not just slide decks.
If you’re planning 2026 priorities right now, I’d take a firm stance on this: put your next dollar into the parts of underwriting, claims automation, and customer engagement where humans are drowning in repetitive tasks. Get cycle time down. Make decisions more consistent. Then scale.
If you want a sanity check on your AI roadmap—vendor selection, pilot design, governance, or KPI setup—build a one-page “workflow scorecard” for a single use case and pressure-test it with your operations leaders. The best AI programs are the ones ops teams actually adopt.
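If it helps to make that scorecard concrete, here is a minimal sketch of one expressed as data, so it can be pressure-tested and compared across vendors. The fields mirror the checklist above, and every entry is a placeholder:

```python
# One-page workflow scorecard for a single use case; entries are placeholders.
WORKFLOW_SCORECARD = {
    "use_case": "commercial submission intake",
    "workflow_fit": {
        "lives_in": "email intake + policy admin system",
        "primary_user": "underwriting_assistant",
        "one_workflow_improved": "quote",
    },
    "data_readiness": {
        "works_with_scanned_pdfs": True,
        "handles_missing_fields": True,
        "core_system_integration": "API (to be verified in pilot)",
    },
    "governance": {
        "audit_log": True,
        "drift_monitoring": "vendor dashboard (needs internal access)",
        "human_in_the_loop": "all declines reviewed",
    },
    "economics": {
        "metric": "touches per submission",
        "baseline": 5,
        "target": 2,
    },
}
```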
What’s one insurance workflow in your organization that still relies on copying data between systems—and why hasn’t it been automated yet?