BPCE Assurances’ “Anna” shows how GenAI improves insurance customer service by speeding answers, reducing escalations, and strengthening compliance.

GenAI for Insurance Service: Lessons from BPCE’s Anna
A customer calls with a simple question: “Am I covered if my phone was stolen abroad?” The real work isn’t answering—it’s finding the answer fast, confirming it’s compliant, and explaining it in plain language. At most insurers, that means hopping between knowledge bases, policy admin systems, PDFs, procedure notes, and tribal knowledge in someone’s head.
BPCE Assurances’ “Anna” (built with Zelros) is a practical counterexample: an AI assistant designed around the daily reality of customer relationship managers and back-office teams. The goal wasn’t to show off a new model. It was to reduce search time, raise answer quality, speed up onboarding, and make the workday less exhausting.
This post is part of our AI in Insurance series, where we look at what actually works in underwriting, claims automation, fraud detection, and customer service. BPCE’s approach is a strong blueprint for any insurer trying to turn generative AI into measurable operational value—without compromising security or compliance.
The real bottleneck in insurance customer service isn’t empathy—it’s retrieval
Insurance service teams don’t struggle because they don’t care. They struggle because the information they need is fragmented. And that fragmentation creates three predictable problems: slow responses, inconsistent answers, and “easy” questions escalated to experts.
BPCE Assurances faced a familiar set of constraints:
- Complex coverage questions that vary by product, rider, date, and customer profile
- Answers distributed across multiple systems (knowledge bases, contract tools, internal procedures)
- Regulatory pressure that makes “close enough” answers risky
- Onboarding pain where new hires can’t build product confidence quickly
Anna’s promise is simple: put relevant, policy-specific knowledge in front of advisors instantly, so they can focus on judgment, communication, and customer outcomes.
Why this matters in December 2025
Year-end and early-year periods tend to spike operational load: policy renewals, updates to terms, customer address changes, new product pushes, and claim-related questions following winter travel and weather events. If your teams are already stretched, “search-and-stitch” workflows fall apart fast.
A GenAI assistant won’t fix broken processes by itself. But when it’s built as a secure knowledge and workflow layer, it can remove the most stubborn friction: time wasted locating the right source and translating it into a usable response.
What BPCE built with Zelros: from classic NLP to “Anna NextGen” with LLMs
BPCE Assurances and Zelros have collaborated for seven years, developing “Anna” to support employees in the Customer Expertise and Relationship Center (CERC) and back offices of the Banque Populaire and Caisse d’Epargne networks.
The trajectory is telling:
- Anna began with state-of-the-art NLP focused on understanding employee questions.
- In November 2023, BPCE launched “Anna NextGen”, integrating a newer OpenAI model hosted in a secure Azure environment.
- The experience was designed for continuity—employees didn’t have to relearn everything to benefit.
That last point is underappreciated. Most companies get AI adoption wrong because they treat it like a technology rollout, not a behavior change program.
“What impresses me is the smooth transition between the old and the new Anna, with no major disruptions.”
— Nofel Goulli, Deputy CEO – BPCE Assurances/BPCE Vie
“Answer first” is the product—and it has to be grounded
LLMs shine when they:
- Interpret messy, real-world questions (“My situation is complicated…”)
- Produce a coherent response quickly
But insurers require something extra: grounding in authoritative internal content. Anna NextGen is positioned as generating responses from BPCE Assurances’ own data and documents in a controlled environment—exactly the pattern insurers should copy.
If you’re evaluating AI in insurance customer service, the question isn’t “Can it chat?” It’s:
- Can it cite and align with your policy wording and procedures?
- Can you control what content it uses?
- Can you audit outputs and improve them over time?
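To make “grounding” concrete, here is a minimal sketch of the pattern in Python. The corpus, the keyword retriever, and the prompt are stand-ins (this is not Anna’s implementation); swap in your own document store and model client. The point is the shape: retrieve from approved content only, cite what was used, and refuse when nothing relevant comes back.

```python
# Minimal sketch of a grounded-answer flow, not Anna's actual code.
# The corpus, retriever, and llm callable are stand-ins for your own
# document store and model client.
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str

APPROVED_CORPUS = [  # in reality: policy wording, procedures, product docs
    Doc("POL-TRAVEL-12", "Theft abroad is covered up to 800 EUR with a police report filed within 48 hours."),
    Doc("PROC-CLAIMS-03", "Phone theft claims require the IMEI number and proof of purchase."),
]

def retrieve(question: str, top_k: int = 3) -> list[Doc]:
    # Naive keyword-overlap scoring; a production system would use embeddings.
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(d.text.lower().split())), d) for d in APPROVED_CORPUS]
    return [d for score, d in sorted(scored, key=lambda s: -s[0]) if score > 0][:top_k]

def answer_with_citations(question: str, llm) -> dict:
    docs = retrieve(question)
    if not docs:
        # No grounding available: refuse instead of improvising.
        return {"answer": "No approved source found; route to an expert.", "sources": []}
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    prompt = (
        "Answer using ONLY the excerpts below and cite their [doc_id]. "
        "If they do not answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {"answer": llm(prompt), "sources": [d.doc_id for d in docs]}
```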
The employee experience benefits are operational—and they show up in metrics
Employee experience improvements aren’t “soft.” In insurance operations, they translate directly into handle time, training cost, and service consistency. BPCE’s summary of benefits maps neatly to measurable outcomes.
For customer relationship managers: less hunting, more resolving
Anna provides instant access to thousands of documents and procedures, plus customer information, and lets advisors refine queries to receive more contextualized answers.
That enables:
- Lower average handle time (AHT) by reducing search and verification steps
- Higher first-contact resolution (FCR) because the advisor can answer without escalating
- Better consistency across advisors, shifts, and regions
- Higher confidence for newer employees, which lowers ramp time and churn risk
A practical way to quantify this in your own operation:
- Measure the baseline: % of interactions that require searching >2 systems
- Track the delta: time-to-first-relevant-document and time-to-answer
- Watch expert escalations: volume, reason, and resolution time
If AI doesn’t reduce at least one of these, it’s entertainment—not transformation.
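To illustrate, here is a small sketch of how those three measurements might be computed from interaction logs. The field names (systems_searched, seconds_to_answer, escalated) are assumptions about what your contact-center tooling can export.

```python
# Sketch: computing the baseline metrics above from interaction logs.
# The log fields are assumptions about your ACD/CRM export format.
from statistics import median

interactions = [  # example rows; in practice, export from your tooling
    {"systems_searched": 3, "seconds_to_answer": 210, "escalated": False},
    {"systems_searched": 1, "seconds_to_answer": 45,  "escalated": False},
    {"systems_searched": 4, "seconds_to_answer": 380, "escalated": True},
]

multi_system_rate = sum(i["systems_searched"] > 2 for i in interactions) / len(interactions)
median_time_to_answer = median(i["seconds_to_answer"] for i in interactions)
escalation_rate = sum(i["escalated"] for i in interactions) / len(interactions)

print(f"Interactions searching >2 systems: {multi_system_rate:.0%}")
print(f"Median time-to-answer: {median_time_to_answer:.0f}s")
print(f"Escalation rate: {escalation_rate:.0%}")
```

Run the same script before and after rollout, and the delta is your answer to “did this actually help?”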
For experts: stop being a helpdesk for routine questions
BPCE highlights that Anna frees experts to focus on genuinely complex cases. That’s a crucial organizational design win.
When experts are constantly interrupted to answer repeat questions, you get:
- slower resolution for high-severity cases
- lower satisfaction among your most expensive talent
- knowledge that never scales
A well-designed AI assistant becomes a first line of knowledge, not a replacement for experts. Experts still own edge cases and policy interpretation—but they’re no longer the default search engine.
For solution admins: no-code tuning is a hidden superpower
BPCE notes a user-friendly, no-code interface for real-time adjustments, plus document processing and classification.
In practice, that means:
- faster updates when procedures change
- fewer tickets back to IT
- more “ops-owned” iteration (the people closest to the work can improve the tool)
This is where many GenAI pilots stall: the model works, but every improvement requires a sprint. No-code administration changes the pace of learning.
“From the very start of the project, we identified and anticipated needs, leading to significant improvements… even before our Anna ambassadors conducted the first tests.”
— Annie Depond, Head of Skills Development & Customer Relations
What insurers should copy (and what they should avoid)
BPCE’s case is valuable because it’s not a one-off chatbot—it’s an operating model. Here are the patterns worth replicating.
Copy this: start with a high-frequency, high-friction workflow
Customer service and back office are perfect proving grounds for AI in insurance because:
- the volume is large
- the questions repeat (with variations)
- the cost of delay is visible in SLAs and NPS
- the knowledge is already written down (just scattered)
A strong first use case looks like:
- many documents, many systems
- clear “right answers” most of the time
- frequent training needs
Copy this: keep the data boundary tight
BPCE’s summary emphasizes a secure and controlled environment on Azure. That’s not just an IT detail—it’s what makes adoption possible in regulated settings.
If you’re building or buying:
- define what content the assistant can access
- restrict sensitive attributes by role
- log prompts and outputs for auditing
- implement human feedback loops (thumbs up/down plus reasons)
GenAI governance isn’t paperwork. It’s how you keep the tool usable after the first incident.
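As a sketch of what that plumbing can look like, here is role-scoped content access plus an append-only audit log in Python. The role names, collections, and record_exchange helper are illustrative, not any particular vendor’s API.

```python
# Sketch of governance plumbing around every assistant call: scope the
# corpus by role, log the full exchange, capture feedback. All names
# here are illustrative assumptions.
import json
import time

ROLE_COLLECTIONS = {  # which content each role's assistant may retrieve from
    "advisor": ["product_docs", "procedures"],
    "expert":  ["product_docs", "procedures", "underwriting_guidelines"],
}

def audit_log(entry: dict, path: str = "assistant_audit.jsonl") -> None:
    # Append-only JSONL log of prompts, outputs, and sources for review.
    entry["ts"] = time.time()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def record_exchange(user_id: str, role: str, prompt: str,
                    output: str, sources: list[str],
                    feedback: str | None = None) -> None:
    audit_log({
        "user": user_id,
        "role": role,
        "collections": ROLE_COLLECTIONS.get(role, []),
        "prompt": prompt,
        "output": output,
        "sources": sources,
        "feedback": feedback,  # e.g. "thumbs_down: outdated exclusion list"
    })
```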
Avoid this: treating generative AI as a “perfect answer machine”
The biggest operational risk in GenAI for insurance customer service is over-trust. Advisors under pressure will copy and paste whatever the assistant produces.
So design for reality:
- show source snippets or document references inside the assistant
- encourage advisors to confirm coverage limitations and exclusions
- provide “safe response templates” for sensitive topics (claims denials, complaints, legal)
- add escalation triggers (“If X, route to expert”)
If you don’t build guardrails, you’re betting your compliance posture on people reading carefully during peak call volume. That’s not a strategy.
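Escalation triggers can start as plain rules layered on top of the model, as in this sketch. The sensitive-topic list and the confidence threshold are placeholder assumptions; yours should come from compliance.

```python
# Sketch: rule-based escalation triggers layered on top of the model.
# Topics and the 0.6 threshold are illustrative assumptions.
SENSITIVE_TOPICS = {"claim denial", "complaint", "legal action", "cancellation"}

def needs_expert(question: str, model_confidence: float) -> tuple[bool, str]:
    q = question.lower()
    for topic in SENSITIVE_TOPICS:
        if topic in q:
            return True, f"sensitive topic: {topic}"
    if model_confidence < 0.6:  # tunable; start conservative
        return True, "low model confidence"
    return False, ""

route, reason = needs_expert("Customer disputes a claim denial", 0.9)
# route == True, reason == "sensitive topic: claim denial"
```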
How this connects to the broader “AI in Insurance” stack (claims, fraud, underwriting)
Customer service assistants like Anna are often the easiest starting point—but they create infrastructure you can reuse across the enterprise. Once you’ve learned how to ground AI on internal documents, manage access, and improve outputs, you can extend the same approach.
Claims automation: from “answering” to “guiding”
A claims assistant can:
- explain what documents are needed for a claim type
- pre-fill forms from existing customer data
- triage claims based on completeness and complexity
The operational payoff is fewer back-and-forth messages and faster cycle times.
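A completeness-and-complexity triage can be surprisingly simple to prototype. In this sketch, the required-document lists and the amount threshold are illustrative assumptions, not real claims rules.

```python
# Sketch: triaging a claim by completeness and complexity before routing.
# Document lists and the 5000 threshold are illustrative assumptions.
REQUIRED_DOCS = {
    "phone_theft": {"police_report", "proof_of_purchase", "imei"},
    "water_damage": {"photos", "repair_estimate"},
}

def triage(claim_type: str, received_docs: set[str], amount: float) -> str:
    missing = REQUIRED_DOCS.get(claim_type, set()) - received_docs
    if missing:
        return f"request_documents: {sorted(missing)}"
    if amount > 5000:  # complexity threshold is an assumption
        return "route_to_adjuster"
    return "fast_track"

print(triage("phone_theft", {"police_report", "imei"}, 600.0))
# -> "request_documents: ['proof_of_purchase']"
```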
Fraud detection: better intake leads to better signals
Fraud models are only as good as the input data. A GenAI layer can improve:
- narrative capture (more structured incident descriptions)
- consistency in coding and categorization
- early flagging (“This pattern matches known fraud typologies”)
The win isn’t that GenAI “catches fraud.” It’s that it helps teams collect cleaner, richer data without slowing down service.
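One concrete technique: ask the model to fill a fixed incident schema instead of producing free text. The schema and prompt below are a sketch; the fields are assumptions, not an industry standard.

```python
# Sketch: prompting the model to emit a fixed incident schema as JSON,
# so downstream fraud models get consistent fields. Schema is illustrative.
INCIDENT_SCHEMA = {
    "incident_type": "one of: theft | loss | damage | accident",
    "date": "ISO 8601 date",
    "location": "city, country",
    "items": "list of affected items with estimated values",
    "third_parties": "names or 'none'",
}

def extraction_prompt(narrative: str) -> str:
    fields = "\n".join(f"- {k}: {v}" for k, v in INCIDENT_SCHEMA.items())
    return (
        "Extract the following fields from the customer's description as JSON. "
        "Use null for anything not stated; do not guess.\n"
        f"{fields}\n\nDescription: {narrative}"
    )
```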
Underwriting: faster clarification loops
Underwriting often stalls on missing info. A grounded assistant can:
- generate clear clarification questions
- explain why a data point is needed
- standardize communications with brokers and customers
That reduces rework and improves turnaround time.
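A sketch of that clarification loop: map each missing field to the reason it’s needed, and generate the question from both. The field definitions below assume a property product and are purely illustrative.

```python
# Sketch: turning missing underwriting fields into clarification questions,
# each carrying the reason it is needed. Field reasons are assumptions.
FIELD_REASONS = {
    "construction_year": "older buildings carry different structural risk",
    "alarm_system": "it affects theft-risk pricing",
    "prior_claims": "claims history is a core rating factor",
}

def clarification_questions(application: dict) -> list[str]:
    return [
        f"Could you confirm the {field.replace('_', ' ')}? We ask because {reason}."
        for field, reason in FIELD_REASONS.items()
        if application.get(field) is None
    ]

print(clarification_questions({"construction_year": 1998, "alarm_system": None}))
# -> questions for alarm_system and prior_claims only
```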
A practical rollout plan inspired by BPCE’s approach
If you want BPCE-like results, focus on adoption mechanics as much as model choice.
- Pick one operational lane (e.g., personal insurance inquiries in a service center)
- Inventory your knowledge (procedures, product docs, exclusions, scripts, FAQs)
- Define “safe answers” and escalation points for high-risk topics
- Pilot with ambassadors (power users who give structured feedback)
- Measure outcomes weekly:
  - time-to-answer
  - escalations to experts
  - onboarding ramp time
  - customer satisfaction for targeted intents
- Iterate fast using admin-friendly tools (no-code where possible)
- Scale only after stability (quality, compliance, and adoption)
If you can’t measure before/after, you’ll end up arguing about vibes.
What to ask vendors (or your internal team) before you commit
When evaluating an AI platform for insurance workflows, these questions surface the truth quickly:
- Grounding: How does the assistant ensure answers come from approved documents?
- Access control: Can responses differ by role (new hire vs expert, front office vs back office)?
- Auditability: Can we export prompts, outputs, and document versions used?
- Change management: How quickly can we update content when products change?
- Failure modes: What happens when the model is uncertain—does it say so, or bluff?
- Operational ownership: Who can tune the system without engineering support?
The best answers sound operational, not theoretical.
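One way to make the failure-modes question operational during a trial: check whether every citation in an answer points to a document that was actually retrieved. A sketch, assuming citations use a [DOC-ID] tag format:

```python
# Sketch: verifying that an answer's citations are grounded in the
# retrieved set. The [DOC-ID] tag format and regex are assumptions.
import re

def citations_are_grounded(answer_text: str, retrieved_ids: set[str]) -> bool:
    cited = set(re.findall(r"\[([A-Z0-9-]+)\]", answer_text))
    return bool(cited) and cited <= retrieved_ids

ok = citations_are_grounded(
    "Theft abroad is covered up to 800 EUR [POL-TRAVEL-12].",
    {"POL-TRAVEL-12", "PROC-CLAIMS-03"},
)
print(ok)  # True; a fabricated citation ID would return False
```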
Where GenAI for insurance service is heading next
BPCE frames Anna NextGen as a first tangible outcome, not the finish line. I agree with that framing. The next wave won’t be “better chat.” It’ll be workflow completion: generating an answer and creating the case note, tagging the intent, updating the CRM, and triggering the right next step.
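In code terms, workflow completion means the assistant returns structured actions alongside the text, and the surrounding systems execute them. A sketch, where the AssistantResult shape and the crm client are hypothetical:

```python
# Sketch of "workflow completion": the assistant returns structured next
# actions, not just text. AssistantResult and the crm client are
# hypothetical, not a real integration.
from dataclasses import dataclass, field

@dataclass
class AssistantResult:
    answer: str
    intent: str                      # e.g. "coverage_question"
    case_note: str                   # ready to file in the CRM
    actions: list[str] = field(default_factory=list)

def complete_workflow(result: AssistantResult, crm) -> None:
    crm.add_note(result.case_note)    # hypothetical CRM client methods
    crm.tag_interaction(result.intent)
    for action in result.actions:     # e.g. "send_claim_form"
        crm.trigger(action)
```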
If you’re leading AI in insurance, this is a solid north star: make the employee the hero, and make the customer feel the speed. Everything else—models, prompts, interfaces—should serve that.
What would happen to your service levels if every advisor could reliably find the right policy answer in 15 seconds?