IAG’s AI-driven ingestion shows how to cut underwriting admin and speed risk decisions. Learn the controls and playbook fintech teams can copy.

AI Data Ingestion for Underwriting: IAG’s Playbook
Property underwriting has a dirty secret: the hardest part often isn’t pricing risk—it’s getting the risk data into the system. At IAG’s intermediated brands (CGU and WFI), underwriters were reportedly touching seven different systems and spending up to half a day just ingesting partner-provided information before they could even start shaping an offer.
That’s not an “admin problem.” It’s a growth problem. It slows down quote turnaround, frustrates brokers, and quietly caps how many policies a team can write—especially in peak periods when new business and renewals stack up.
IAG’s push to rework high-volume data ingestion with automation and AI is a useful case study for anyone building in AI in finance and fintech. The lesson is simple: if your data intake is messy, your models, decisions, and customer experiences will be messy too.
The real bottleneck in underwriting isn’t risk—it’s data entry
Answer first: In modern insurance, underwriting speed and accuracy are limited by how fast you can turn unstructured documents into structured, decision-ready data.
IAG’s example is painfully familiar across financial services. Risk information arrives as schedules, statements, spreadsheets, PDFs, emails, and “here’s what we have” attachments from partners. In IAG’s property workflow, three key documents drive intake: an asset schedule that can run from one or two assets to thousands of locations and asset types, a risk schedule, and prior history.
When that content is manually re-keyed, three predictable things happen:
- Cycle time blows out. Half a day of intake means the quote itself becomes the easy part.
- Error rates creep in. Manual re-entry is a silent source of downstream rework, leakage, and disputes.
- Your best people do the worst work. Underwriters are hired to understand risk, not to reconcile column names and retype addresses.
IAG framed the fix as a “commercial enablement” effort—reducing administrative burden and manual controls—because that’s what it is. Faster ingestion equals more underwriting capacity without hiring.
What IAG’s target accuracy (98%) tells you about production AI
Answer first: For underwriting ingestion, “pretty good” AI isn’t good enough—you need near-operational accuracy plus controls.
IAG set a goal of extracting data from partner documents and ingesting it into underwriting tools with around 98% accuracy. That number is doing a lot of work. It signals something many fintech teams underestimate:
- In risk and regulated decisioning, the cost of a wrong field can dwarf the benefit of automation.
- Underwriting data isn’t just “information.” It’s pricing inputs, exclusions, coverage limits, and compliance obligations.
The source article describes an evolution from OCR-led extraction to a pilot incorporating AI and large language models, with Appian involved. Early results reportedly landed around 68% accuracy, then improved to 96–98% within a couple of months of close iteration.
Here’s what’s happening under the hood in most successful ingestion programs:
OCR gets you characters; AI gets you meaning
- OCR reads text.
- AI extraction maps text to fields (asset type, occupancy, construction, address normalisation, sums insured, deductibles, prior loss notes).
- LLM assistance can classify document types, interpret inconsistent phrasing, and help route exceptions.
But the production requirement is the same: structured, validated data with traceability.
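To make that layering concrete, here’s a minimal sketch of the extraction step. The field list, document types, and the call_llm helper are illustrative assumptions, not a description of IAG’s or Appian’s actual stack.

```python
# Minimal sketch of OCR text -> document classification -> field extraction.
# Document types, field names, and call_llm() are illustrative assumptions.
import json
from dataclasses import dataclass

EXTRACTION_PROMPT = """Classify this document (asset_schedule, risk_schedule,
claims_history) and return JSON with: document_type, and a list of fields,
each with name, value, confidence (0-1), and span (the text it came from).

Document text:
{ocr_text}
"""

@dataclass
class ExtractedField:
    name: str          # e.g. "sum_insured", "construction_type"
    value: str
    confidence: float  # model-reported; typically recalibrated downstream
    source_span: str   # snippet the value was read from, kept for audit/review

def extract_fields(ocr_text: str, call_llm) -> tuple[str, list[ExtractedField]]:
    """OCR gives us characters (ocr_text); this step turns them into meaning."""
    raw = call_llm(EXTRACTION_PROMPT.format(ocr_text=ocr_text))
    parsed = json.loads(raw)
    fields = [
        ExtractedField(f["name"], f["value"], float(f["confidence"]), f["span"])
        for f in parsed["fields"]
    ]
    return parsed["document_type"], fields
```

Note what the structure buys you: every value carries a confidence score and the span it came from, which is what makes validation, review, and audit possible downstream.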
The “last 2%” is the whole job
Going from 68% to 96% is exciting. Going from 96% to “safe for production” is where teams earn their keep. That last mile usually demands:
- Tight document taxonomy (what are the top templates and edge cases?)
- Field-level confidence scoring
- Human-in-the-loop review for low-confidence fields
- Automated validation rules (format, ranges, cross-field checks)
- Clear audit trails (what was extracted, from where, when, and by what model/version)
If you’re building AI workflows in finance, the model is only half the product. The rest is controls, exception handling, and operational design.
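For the validation piece specifically, a minimal sketch looks like the code below. The field names, formats, and ranges are assumptions chosen for illustration, not IAG’s actual rules.

```python
# Minimal sketch of automated validation: format, range, and cross-field
# checks, plus an audit record of what was checked and when.
# Field names, formats, and thresholds are illustrative assumptions.
import re
from datetime import datetime, timezone

def validate_submission(fields: dict[str, str]) -> dict:
    warnings = []

    # Format check: Australian postcodes are four digits.
    postcode = fields.get("postcode", "")
    if not re.fullmatch(r"\d{4}", postcode):
        warnings.append(f"postcode '{postcode}' is not a 4-digit value")

    # Range check: sums insured should be positive and below a sanity ceiling.
    try:
        sum_insured = float(fields.get("sum_insured", "0").replace(",", ""))
        if not (0 < sum_insured < 5_000_000_000):
            warnings.append(f"sum_insured {sum_insured} outside expected range")
    except ValueError:
        sum_insured = None
        warnings.append("sum_insured is not numeric")

    # Cross-field check: a deductible larger than the sum insured is suspect.
    try:
        deductible = float(fields.get("deductible", "0").replace(",", ""))
        if sum_insured is not None and deductible > sum_insured:
            warnings.append("deductible exceeds sum_insured")
    except ValueError:
        warnings.append("deductible is not numeric")

    # Audit trail: what was validated, when, and against which rule set.
    return {
        "fields": fields,
        "warnings": warnings,
        "validated_at": datetime.now(timezone.utc).isoformat(),
        "rule_set_version": "example-rules-v1",
    }
```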
A better way to think about AI: not a helper, a process redesign tool
Answer first: The strongest AI outcomes come when you redesign the workflow around AI, rather than bolting AI onto the old workflow.
One comment from IAG’s executive (paraphrased from the article) is worth sitting with: they initially saw AI as assistive—an underwriter’s helper—then realised it needed to sit “at the heart of process change.”
That shift matters. In financial services, many AI efforts stall because they aim too low:
- “Summarise this document for the analyst.” Nice.
- “Auto-ingest and validate the document so the analyst starts from clean data.” That changes throughput.
Here’s what “AI at the heart of the process” typically means in underwriting ingestion:
- Straight-through processing (STP) for high-confidence, standard documents
- Guided review for partial confidence (only the uncertain fields need attention)
- Exception routing for outliers (new templates, poor scans, missing pages)
When teams do this well, the result isn’t just speed. It’s consistency. And consistency is the foundation for reliable risk pricing and portfolio steering.
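Here’s a minimal sketch of that routing logic. The thresholds are purely illustrative and would in practice be tuned per field and per template.

```python
# Minimal sketch of confidence-based routing: straight-through processing,
# guided review of uncertain fields only, or exception handling.
# Thresholds are illustrative assumptions, not production values.
STP_THRESHOLD = 0.98      # every field must clear this for straight-through
REVIEW_THRESHOLD = 0.80   # below this, the submission becomes an exception

def route_submission(field_confidences: dict[str, float],
                     validation_warnings: list[str]) -> dict:
    low = [n for n, c in field_confidences.items() if c < STP_THRESHOLD]
    very_low = [n for n, c in field_confidences.items() if c < REVIEW_THRESHOLD]

    if very_low or validation_warnings:
        # Outliers: new templates, poor scans, failed checks.
        return {"route": "exception", "reason": very_low or validation_warnings}
    if low:
        # Guided review: only the uncertain fields need an underwriter's attention.
        return {"route": "guided_review", "fields_to_review": low}
    return {"route": "straight_through"}
```

The point isn’t the thresholds; it’s that the workflow, not the individual underwriter, decides what deserves attention.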
Why fintech should care: underwriting ingestion is the same problem as fraud and credit
Answer first: Whether you’re pricing a property risk, scoring a borrower, or stopping fraud, the winners are the firms that ingest data fastest—and trust it.
IAG’s story sits inside insurance, but the pattern maps cleanly to other AI in finance use cases.
Bridge point 1: Data ingestion is credit scoring’s quiet dependency
Credit scoring models depend on clean inputs: income signals, bank transaction categorisation, liabilities, employment history, and identity verification. If your intake layer is brittle, the model becomes a liability.
A practical parallel:
- Insurance: asset schedules + claims history → underwriting system
- Lending: bank statements + payslips + liabilities → decision engine
Both need document understanding, field extraction, and validation.
Bridge point 2: Fraud detection lives or dies on latency and data quality
Fraud teams don’t just need “more data.” They need faster, correct data. If merchant names are messy, addresses aren’t standardised, or attachments aren’t parsed, you end up with:
- more false positives (annoying customers)
- more false negatives (real losses)
- slower investigations (higher cost per case)
The same operational truth shows up in underwriting: slow ingestion equals slow decisions, and slow decisions lose deals.
Bridge point 3: Algorithmic trading taught the market a brutal lesson
Trading has long understood that data pipelines are a competitive advantage. Underwriting and lending are learning the same lesson now—just with different constraints (auditability, explainability, and customer fairness).
If your competitors can ingest broker submissions in minutes and you need hours, your loss ratio won’t be the first thing that suffers—your top-of-funnel will.
How to implement AI ingestion safely in regulated environments
Answer first: Treat AI ingestion like a risk system: define controls, measure drift, and design for audit from day one.
If you want the benefits IAG is chasing—more capacity, faster turnaround, better colleague experience—here’s the operational checklist I’ve found works in financial services.
1) Start with the “volume × pain” use case
IAG chose a complex ingestion task and found that proving the hardest case unlocked broader reuse across acquisition and claims. That’s a legitimate strategy, but it only works if you can resource it.
A more reliable selection framework is:
- High document volume
- High manual time per file
- Clear field list (what must be extracted)
- Measurable downstream impact (cycle time, referral rate, rework)
2) Set accuracy targets by field, not by document
“98% accuracy” sounds good, but production systems need accuracy targets by field criticality:
- Tier 1: pricing/coverage-critical fields (must be near-perfect, strong validation)
- Tier 2: decision-support fields (can be assisted with review)
- Tier 3: nice-to-have metadata (acceptable to miss occasionally)
This helps you automate responsibly without waiting forever for perfection.
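One way to encode this is as configuration the pipeline reads, so automation policy follows field criticality rather than a single document-level number. The field names and targets below are illustrative assumptions.

```python
# Minimal sketch of field-level criticality tiers driving accuracy targets
# and review decisions. Field names and targets are illustrative assumptions.
FIELD_TIERS = {
    # Tier 1: pricing/coverage-critical, near-perfect target, hard validation.
    "sum_insured":       {"tier": 1, "target_accuracy": 0.995},
    "construction_type": {"tier": 1, "target_accuracy": 0.995},
    # Tier 2: decision-support, assisted review is acceptable.
    "occupancy":         {"tier": 2, "target_accuracy": 0.98},
    "prior_loss_notes":  {"tier": 2, "target_accuracy": 0.95},
    # Tier 3: nice-to-have metadata, acceptable to miss occasionally.
    "broker_reference":  {"tier": 3, "target_accuracy": 0.90},
}

def needs_review(field_name: str, confidence: float) -> bool:
    """Route a single field; unknown fields default to Tier 2 treatment."""
    cfg = FIELD_TIERS.get(field_name, {"tier": 2, "target_accuracy": 0.98})
    return confidence < cfg["target_accuracy"]
```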
3) Build human-in-the-loop as a feature, not a fallback
Human review isn’t failure. It’s the control layer.
Done well, reviewers see:
- extracted value
- source snippet highlight
- model confidence
- validation warnings
…and can correct in seconds.
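A sketch of the record behind that review screen, using a shape I’ve found useful rather than any specific vendor’s schema:

```python
# Minimal sketch of a human-in-the-loop review item: everything a reviewer
# needs to accept or correct a field in seconds. The shape is an illustrative
# assumption, not a particular product's schema.
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    field_name: str                  # e.g. "sum_insured"
    extracted_value: str             # what the model read
    confidence: float                # model/calibrated confidence, 0-1
    source_snippet: str              # highlighted text the value came from
    source_page: int                 # where to find it in the original document
    validation_warnings: list[str] = field(default_factory=list)
    corrected_value: str | None = None   # filled in by the reviewer if needed

    def final_value(self) -> str:
        """The reviewed value: the correction if one was made, else the extraction."""
        return self.corrected_value if self.corrected_value is not None else self.extracted_value
```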
4) Operationalise monitoring: drift is guaranteed
Document templates change. Broker behaviour changes. Scanning quality changes. If you don’t monitor:
- extraction accuracy by partner/template
- exception rate trends
- manual touches per submission
…your “AI ingestion” becomes yesterday’s brittle rules engine.
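A sketch of the cut that matters: metrics per partner and template, computed on a regular cadence so drift shows up as a trend rather than a surprise. The record shape and metric names are assumptions.

```python
# Minimal sketch of drift monitoring: per-partner/per-template field accuracy,
# exception rate, and manual touches. Record shape is an illustrative assumption.
from collections import defaultdict

def weekly_metrics(submissions: list[dict]) -> dict:
    """Each submission dict carries: partner, template, fields_total,
    fields_corrected, routed_exception (bool), manual_touches (int)."""
    by_key = defaultdict(lambda: {"subs": 0, "fields": 0, "corrected": 0,
                                  "exceptions": 0, "touches": 0})
    for s in submissions:
        m = by_key[(s["partner"], s["template"])]
        m["subs"] += 1
        m["fields"] += s["fields_total"]
        m["corrected"] += s["fields_corrected"]
        m["exceptions"] += int(s["routed_exception"])
        m["touches"] += s["manual_touches"]

    report = {}
    for key, m in by_key.items():
        report[key] = {
            "field_accuracy": 1 - m["corrected"] / max(m["fields"], 1),
            "exception_rate": m["exceptions"] / m["subs"],
            "manual_touches_per_submission": m["touches"] / m["subs"],
        }
    return report
```

Track these by partner and template, not just in aggregate—a global average will hide the one broker whose new spreadsheet format quietly broke extraction.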
5) Don’t ignore platform integration
IAG also has a longer automation history (including claims platform consolidation). That matters because ingestion isn’t useful unless the data lands where decisions are made.
For most organisations, the real work is integrating ingestion outputs into:
- underwriting workbenches
- rules engines
- pricing services
- CRM and broker portals
- claims platforms
This is where many pilots die: great demo, no operational landing zone.
“People also ask” (and what I tell teams)
Does AI ingestion replace underwriters or analysts?
It replaces re-keying and triage, not judgement. The fastest ROI comes from letting experts spend more time on exceptions, negotiations, and portfolio decisions.
Is OCR enough for insurance and finance workflows?
OCR alone is table stakes. You need document classification, field extraction, validation, and audit trails to make it operational.
What’s a realistic timeline to see value?
If you focus on one use case and have clear templates, teams often see measurable cycle-time reduction in 8–16 weeks. Enterprise-wide rollouts take longer because integration and controls take longer.
What to do next if your intake process is slowing growth
AI data ingestion for underwriting is no longer experimental. The IAG example shows the maturity curve: early accuracy isn’t impressive, iteration matters, and operationalising the workflow is where value shows up.
If you’re working in insurance, banking, or fintech, the practical next step is straightforward: measure how much expert time is being burned on turning documents into data. Put a number on it (hours per week, cost per submission, average turnaround time). That becomes your business case.
This is the direction the broader “AI in Finance and FinTech” series keeps pointing to: better models are great, but better pipelines win quarters. When your ingestion layer is reliable, risk assessment improves, decisions get faster, and customer experiences stop being held hostage by PDFs.
If your organisation could cut submission intake from hours to minutes, what would you do with the extra underwriting or analyst capacity—and how quickly would your customers notice?