Use ITI Asia insights to ship AI in underwriting, claims, fraud, and service—plus a practical conference playbook and the TDI member discount code.

ITI Asia: The AI Insurance Agenda You Should Copy
A 30% discount code sounds like a small thing. But for insurance leaders trying to turn AI in insurance from a slide deck into measurable outcomes, it’s often the nudge that finally gets the budget approved to be in the room where the real conversations happen.
That’s why the ITI Asia event (Insurtech Insights Asia) matters well beyond “networking.” When an event attracts 5,000+ insurance professionals and C-level executives and features 300+ speakers, you’re not going to get a single “answer.” You’re going to see patterns—what’s working, what’s stalled, and what people are quietly rebuilding after discovering their first AI pilot didn’t survive contact with operations.
This post is part of our AI in Insurance series, and it’s written for one goal: help you attend (or debrief) major industry conferences like ITI Asia with a practical plan—so you come back with a shortlist of AI use cases, vendors, and operating model decisions you can actually execute.
The fastest way to fall behind in insurance AI isn’t lack of ambition. It’s spending 6 months testing models that never ship.
Why ITI Asia-style conferences matter for AI in insurance
Conferences matter because insurance AI is now an execution race, not an awareness race. Most carriers already know they “should be using GenAI.” The difference in 2026 results will come down to who has:
- Industrialized claims automation instead of running isolated pilots
- Built governance that doesn’t smother delivery
- Fixed data access so underwriting and servicing teams can trust outputs
- Put humans in the loop in the right places (and removed them from the wrong ones)
At an event like ITI Asia, you can pressure-test your approach against peers in two days. You’ll hear how leaders are solving uncomfortable problems—like AI output auditability, model risk controls, and what happens when the business wants faster cycle times but compliance wants slower release trains.
The realistic upside: speed to decision, not “inspiration”
Most companies get this wrong. They send a team to “learn about AI,” come back with 40 pages of notes, and nothing changes.
The real ROI is different: you use the conference to reduce uncertainty on 3–5 key decisions you’re currently stuck on, such as:
- Should we implement GenAI in the contact center or claims first?
- Do we buy a claims AI platform, partner with a specialist, or build internally?
- What governance controls are “minimum viable” so we can ship safely?
If you leave with answers to those, the trip pays for itself.
The 5 AI themes you should listen for (and what to do with them)
With 300+ speakers, your job is to filter. Here are five themes that consistently separate high-performing AI programs from the ones that stay in PowerPoint.
1) Underwriting AI that improves decision quality (not just speed)
The point of AI underwriting isn’t to automate every decision. It’s to improve risk selection and pricing consistency while reducing turnaround time where it actually matters.
What to listen for in sessions and side conversations:
- How carriers are combining predictive models with GenAI (e.g., GenAI summarizes submissions; predictive models score risk)
- Whether “straight-through processing” is being used responsibly (clear appetite + clean data) or as a vanity metric
- How underwriters override AI recommendations—and whether those overrides are tracked and learned from
What to do when you get back:
- Pick one high-volume segment (SME property, motor, simple life) and define three decisions the model can support (triage, missing info, referral rationale); see the sketch after this list.
- Measure underwriting AI with two metrics: cycle time and loss ratio stability. If you only track cycle time, you’ll optimize for the wrong thing.
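To make that concrete, here is a minimal Python sketch of the pattern above, assuming a GenAI wrapper that returns a structured summary plus a list of gaps, and a predictive model that returns a score in [0, 1]. The function names and thresholds are illustrative placeholders, not a specific product or vendor API.

```python
from dataclasses import dataclass

# Illustrative triage outcomes for a single high-volume segment (e.g. SME property).
ACCEPT, REFER, DECLINE = "accept", "refer_to_underwriter", "decline"

@dataclass
class TriageResult:
    decision: str
    risk_score: float          # from the predictive model, not the LLM
    missing_info: list[str]    # gaps the GenAI summary identified
    rationale: str             # plain-language referral rationale for the underwriter

def triage_submission(raw_submission: str,
                      summarize_submission,            # hypothetical GenAI wrapper
                      score_risk,                      # hypothetical predictive-model wrapper
                      accept_below: float = 0.3,
                      decline_above: float = 0.8) -> TriageResult:
    """GenAI summarizes and spots gaps; the predictive model owns the score;
    anything ambiguous is referred to a human underwriter."""
    summary = summarize_submission(raw_submission)     # assumed shape: {"fields": {...}, "missing": [...]}
    score = score_risk(summary["fields"])              # assumed: float in [0, 1]

    if summary["missing"]:
        return TriageResult(REFER, score, summary["missing"],
                            "Missing information: " + ", ".join(summary["missing"]))
    if score < accept_below:
        return TriageResult(ACCEPT, score, [], "Within appetite; low predicted risk.")
    if score > decline_above:
        return TriageResult(DECLINE, score, [], "Outside appetite; high predicted risk.")
    return TriageResult(REFER, score, [], "Borderline score; underwriter judgment required.")
```

If an underwriter overrides a REFER or DECLINE, log the override against the rationale; that log is what lets you learn from overrides rather than just count them.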
2) Claims automation that customers actually feel
AI in claims wins when it reduces customer effort. Faster settlement is great, but fewer touchpoints is better.
Listen for:
- Claims triage approaches (severity prediction, litigation propensity, fraud signals)
- Document ingestion patterns: FNOL notes, medical bills, repair invoices, photos
- How teams handle “gray zone” claims where policy interpretation needs explanation
A practical stance: if your claims AI initiative can’t produce a clearer customer message, it’s not finished.
What to do next:
- Identify 2–3 claim types that are both high volume and document-heavy.
- Build an “AI claims workbench” concept: summarize, extract, recommend next-best action, and generate customer updates—with adjuster approval.
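A minimal sketch of how that workbench could hang together, assuming hypothetical `summarize`, `extract`, `recommend`, and `draft` services wired in behind it; the point is the shape of the workflow and the hard adjuster-approval gate, not any particular tooling.

```python
from dataclasses import dataclass, field

@dataclass
class WorkbenchCase:
    claim_id: str
    documents: list[str]                 # FNOL notes, medical bills, repair invoices, photo descriptions
    summary: str = ""
    extracted: dict = field(default_factory=dict)
    next_best_action: str = ""
    draft_update: str = ""
    adjuster_approved: bool = False      # nothing reaches the customer without this flag

def run_workbench(case: WorkbenchCase, summarize, extract, recommend, draft) -> WorkbenchCase:
    """Summarize -> extract -> recommend -> draft, with the adjuster as the final gate."""
    case.summary = summarize(case.documents)            # GenAI summary of the claim file
    case.extracted = extract(case.documents)            # structured fields: amounts, dates, parties
    case.next_best_action = recommend(case.extracted)   # e.g. "request repair estimate"
    case.draft_update = draft(case.summary, case.next_best_action)  # customer-facing message
    return case                                         # adjuster reviews everything before approval

def send_customer_update(case: WorkbenchCase, send) -> None:
    if not case.adjuster_approved:
        raise PermissionError("Adjuster approval required before contacting the customer.")
    send(case.claim_id, case.draft_update)
```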
3) Fraud detection that goes beyond flags
Fraud isn’t just detection; it’s workflow. Many carriers can flag suspicious claims. Fewer can:
- Prioritize cases by expected value
- Route them to the right investigator
- Capture outcomes as feedback loops to improve future detection
Listen for:
- How fraud teams are combining network analytics with GenAI narrative analysis
- Whether they’ve reduced false positives (the hidden tax on SIU teams)
- How they handle explainability and audit trails
What to do next:
- Make “time to investigative decision” a core KPI.
- Require every fraud model output to include: top drivers, confidence, and recommended action.
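One way to enforce that requirement is to make it a contract in code, and to queue SIU work by expected value rather than raw flag score. The field names and the expected-value arithmetic below are illustrative assumptions, not an industry standard; adjust the cost and amount definitions to your own SIU economics.

```python
from dataclasses import dataclass

@dataclass
class FraudAlert:
    claim_id: str
    confidence: float            # model-estimated probability of fraud, 0..1
    top_drivers: list[str]       # e.g. ["linked repairer network", "late FNOL", "prior SIU referral"]
    recommended_action: str      # e.g. "desk review", "field investigation", "pay and close"
    amount_at_risk: float        # reserve or claimed amount

def expected_value(alert: FraudAlert, investigation_cost: float) -> float:
    """Rough expected savings of investigating: P(fraud) * amount at risk - cost to investigate."""
    return alert.confidence * alert.amount_at_risk - investigation_cost

def prioritize_queue(alerts: list[FraudAlert], investigation_cost: float = 500.0) -> list[FraudAlert]:
    """Order the SIU queue by expected value rather than raw flag score."""
    return sorted(alerts, key=lambda a: expected_value(a, investigation_cost), reverse=True)
```

Pair this with a closure record (fraud confirmed or not, amount recovered, time to decision) so outcomes feed back into the model and false positives become measurable.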
4) Customer engagement AI that doesn’t create compliance risk
Everyone wants AI chatbots and AI customer service. The firms doing it safely are treating GenAI as a co-pilot, not an autopilot.
Listen for:
- Guardrails: retrieval from approved knowledge bases, response constraints, refusal patterns
- Human-in-the-loop designs (especially for coverage questions)
- Metrics beyond containment rate: complaint rate, recontact rate, and quality monitoring outcomes
My take: you should be suspicious of any demo where GenAI answers coverage questions freely from the open internet.
What to do next:
- Start with internal enablement: GenAI for agents and service reps to draft responses with citations from approved policy and SOP content (a minimal sketch follows this list).
- Only then graduate to customer-facing experiences.
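Here is that internal-enablement pattern as a sketch: retrieve only from approved policy and SOP content, require citations, and refuse when nothing relevant is found. `search_approved_kb` and `generate_draft` are hypothetical wrappers for whatever retrieval and LLM services you actually use.

```python
def draft_agent_reply(question: str, search_approved_kb, generate_draft,
                      min_relevance: float = 0.7) -> dict:
    """Draft a reply for a service rep, grounded only in approved content."""
    # Retrieve from approved policy wordings and SOPs -- never the open internet.
    passages = search_approved_kb(question)     # assumed shape: [{"doc_id": ..., "text": ..., "score": ...}]
    cited = [p for p in passages if p["score"] >= min_relevance]

    if not cited:
        # Refusal pattern: no grounded source means no answer; route to a human.
        return {"draft": None, "citations": [], "status": "refer_to_specialist",
                "reason": "No approved source found for this question."}

    draft = generate_draft(question, [p["text"] for p in cited])  # LLM constrained to the cited text
    return {"draft": draft,
            "citations": [p["doc_id"] for p in cited],
            "status": "needs_rep_review"}       # a human reviews before anything is sent
```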
5) The operating model: governance that enables delivery
The event content will be full of tooling and vendors. The harder (and more valuable) conversations are about how work gets done.
Listen for:
- Who owns AI products (IT, data, business, or a hybrid model)
- How model risk management is integrated into agile delivery
- How privacy, security, and compliance approvals are handled without 10-week delays
What to do next:
- Set up a lightweight AI governance rhythm:
  - Weekly risk triage for new AI use cases
  - Standard control checklist (data access, bias testing, logging, human review)
  - Release gates tied to risk level
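One way to keep that rhythm lightweight is to treat the checklist and release gates as configuration rather than a document. The tiers and control names below are illustrative assumptions, not a regulatory standard; swap in your own.

```python
# Illustrative control checklist and release gates, keyed by risk tier.
CONTROLS_BY_TIER = {
    "low":    {"data_access_review", "logging"},
    "medium": {"data_access_review", "logging", "bias_testing", "human_review"},
    "high":   {"data_access_review", "logging", "bias_testing", "human_review",
               "model_risk_signoff", "compliance_signoff"},
}

def release_gate(risk_tier: str, completed_controls: set[str]) -> tuple[bool, list[str]]:
    """Return (approved, missing_controls) for the weekly risk triage."""
    missing = sorted(CONTROLS_BY_TIER[risk_tier] - completed_controls)
    return (not missing, missing)

# Example: a claims summarization co-pilot reviewed at the weekly triage.
approved, gaps = release_gate("medium", {"data_access_review", "logging", "bias_testing"})
# approved == False, gaps == ["human_review"]: blocked until the human-review step is designed.
```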
A “two-day conference” playbook for AI leaders (so you don’t waste it)
If you’re attending a large event, don’t treat it like a buffet. Treat it like a sprint.
Before you go: define your 3 outcomes
Write down three outcomes you want by the end of day two:
- One use case you can launch in 90 days
- One vendor shortlist (even if you decide to build)
- One operating model decision you’ll stop debating internally
Then build your schedule around those outcomes, not around who has the biggest brand on stage.
During the event: ask better questions
Here are questions that force real answers:
- “What’s the failure mode you hit first—data, change management, or compliance?”
- “What’s your human-in-the-loop design, and where did you remove humans entirely?”
- “What’s the one control you won’t compromise on?”
- “How did you measure impact—cycle time, leakage, NPS, expense ratio?”
- “How long did procurement and InfoSec take compared to model development?”
You’re looking for the operational truth, not the marketing version.
After the event: convert notes into a decision memo
Within 72 hours, produce a one-page memo with:
- Use case chosen + why now
- Data required + where it lives
- Risks + controls
- Build/partner recommendation
- 90-day plan + owner
If you can’t write that memo, you didn’t actually learn anything usable.
What ITI Asia signals about 2026 priorities for insurers
Even though the original announcement is about ITI Asia and a member discount code, the scale of the event reflects where the market is going:
- AI is moving from innovation teams to core P&L owners. Underwriting and claims leaders are now expected to ship.
- GenAI is becoming a layer across operations, especially for summarization, document processing, and agent assistance.
- Risk, compliance, and model governance are product features. If you bolt them on later, you’ll slow down later.
If your 2026 plan still frames AI as “experiments,” you’re already behind your peers.
Practical details (and how to use the member discount)
Insurtech Insights Asia is a major insurance conference that brings together thousands of professionals and hundreds of speakers. The Digital Insurer (TDI) shared that TDI members are eligible for a 30% discount using the promo code TDIASIA30 during registration.
If you’re deciding whether a conference like this is worth it, use a simple rule: attend when you have active AI work in flight and real decisions to make. That’s when the conversations turn from interesting to profitable.
Your next move: turn the event into an AI execution advantage
The strongest AI insurance teams I’ve seen do one thing consistently: they treat every external event—conferences, roundtables, vendor briefings—as input to a disciplined delivery machine.
Pick one AI initiative (underwriting triage, claims automation, fraud workflow, or customer service co-pilot). Decide how you’ll measure it. Put a 90-day clock on it. Then use what you learn at events like ITI Asia to remove blockers faster than your competitors.
If you could only bring back one thing from a conference, make it this: a concrete AI use case with an owner, a metric, and a go-live date. What would yours be?