A practical read on the 40 CX leaders shaping 2026—and what their ideas mean for AI contact centers, automation, agent assist, and measurable ROI.

40 CX Leaders to Follow for AI Contact Center Strategy
December is when a lot of CX teams quietly rewrite their 2026 plans. Budgets get finalized, headcount assumptions harden, and suddenly “we should add AI” turns into a real question: Where exactly does AI belong in our customer service and contact center operating model?
CX Network’s “Top 40 future of CX leaders to follow in 2026” isn’t just a list of smart people. Read through the themes in their work and you’ll see a clear direction: customer service is shifting from reactive support to predictive, semi-autonomous service—without losing the human center. That tension (automation vs. trust) is the make-or-break issue for contact centers in 2026.
This post pulls the most useful patterns from that leader list and translates them into practical decisions you can make in your AI in Customer Service & Contact Centers roadmap—what to automate, what to keep human, and what to measure so you can prove ROI.
The 2026 shift: from “faster replies” to autonomous service
AI in customer service is moving up the stack—from answering questions to preventing them. Multiple leaders in the list point to a future where service becomes “invisible”: issues get resolved before customers even notice.
Steven Van Belleghem describes the trajectory clearly: reactive → predictive → “self-driving.” In contact center terms, that means you’re not only improving handle time or deflection. You’re redesigning the service layer so the customer doesn’t need to initiate contact in the first place.
What autonomous CX looks like in a contact center
Autonomous service doesn’t mean “bots everywhere.” It means systems that detect, decide, and resolve within guardrails. Examples most organizations can implement in 2026 (with a rough sketch of the first one after the list):
- Proactive incident outreach: when an outage hits, customers get a clear message, timeline, and next best action—before they call.
- Predictive billing support: flags likely confusion (plan change, proration, late fee) and triggers an explanation + one-tap resolution.
- Delivery exception handling: identifies delays early and offers alternatives (refund, reroute, reschedule) without agent intervention.
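To ground the first pattern, here's a minimal sketch of a proactive outage trigger, assuming your monitoring stack can already tell you which accounts an incident touches. The `Outage` and `build_outreach` names and the status page placeholder are illustrative, not a specific vendor API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Outage:
    service: str
    affected_account_ids: set[str]
    estimated_fix: datetime

@dataclass
class ProactiveMessage:
    account_id: str
    channel: str
    body: str

def build_outreach(outage: Outage, preferred_channel: dict[str, str]) -> list[ProactiveMessage]:
    """Draft one message per affected account before any of them has to call."""
    body = (
        f"We're seeing an issue with {outage.service}. Our team is on it and expects "
        f"a fix by {outage.estimated_fix:%H:%M}. No action needed on your side; "
        "track live status at <status page URL>."
    )
    return [
        ProactiveMessage(acct, preferred_channel.get(acct, "email"), body)
        for acct in sorted(outage.affected_account_ids)
    ]

# Example: a payments outage touching two accounts, one of which prefers SMS.
msgs = build_outreach(
    Outage("payments-api", {"acct-17", "acct-42"}, datetime(2026, 1, 5, 14, 30)),
    preferred_channel={"acct-17": "sms"},
)
print(msgs[0].channel, "->", msgs[0].body[:60])
```

The same shape (detect, decide within guardrails, message before the call) carries over to the billing and delivery examples.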
If you’re thinking, “That sounds like product and ops, not contact center,” you’re right. That’s the point. AI forces customer service to become cross-functional again—because prevention lives upstream.
Human + AI is the only sustainable operating model
The strongest thread across these leaders is a refusal to treat AI as a replacement plan.
Jay Baer emphasizes “human-first” CX in a digital world. Maranda Dziekonski frames it as AI providing speed and clarity while people bring empathy and common sense. Shawn Nason is blunt: keep “human in the loop” for the foreseeable future.
Here’s my take: contact centers that chase automation as a headcount strategy will see short-term savings and long-term churn. Customers don’t hate automation. They hate being trapped in it.
A practical way to split work between AI and agents
A clean division that works in real operations (see the triage sketch after these lists):
Let AI handle
- Repetitive, well-defined intents (order status, password resets, address changes)
- Information retrieval across messy systems (policy details, eligibility, plan rules)
- After-call work drafts (summaries, disposition suggestions, follow-up emails)
- Routing and prioritization (intent, sentiment, value, urgency)
Keep humans for
- Emotional stakes (bereavement, fraud, cancellations, complaints)
- High-risk decisions (credit, claims exceptions, regulatory disclosures)
- Ambiguous problem solving (multi-issue cases, “nothing works” escalations)
- Relationship moments (retention, save offers, loyalty recognition)
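Here is that split written out as a rough triage rule, assuming your platform can tag each contact with an intent, a sentiment score, and a count of open issues. The intent names and thresholds are placeholders, not a vendor schema.

```python
# Intent names and thresholds below are illustrative, not a vendor schema.
AUTOMATE_INTENTS = {"order_status", "password_reset", "address_change"}
HUMAN_ONLY_INTENTS = {"bereavement", "fraud", "cancellation", "complaint",
                      "credit_decision", "claims_exception"}

def triage(intent: str, sentiment: float, open_issues: int, customer_value: str) -> str:
    """Return a queue: 'automation', 'agent', or 'priority_agent'."""
    if intent in HUMAN_ONLY_INTENTS:
        # Emotional stakes, high-risk decisions, relationship moments.
        return "priority_agent" if customer_value == "high" else "agent"
    if open_issues > 1 or sentiment < -0.5:
        # Multi-issue cases or clearly frustrated customers skip the bot.
        return "agent"
    if intent in AUTOMATE_INTENTS:
        return "automation"
    # Ambiguous or unknown intents default to a human, not a dead end.
    return "agent"

print(triage("order_status", sentiment=0.1, open_issues=0, customer_value="standard"))  # automation
print(triage("cancellation", sentiment=-0.8, open_issues=2, customer_value="high"))     # priority_agent
```

Note the default: anything ambiguous lands with a person rather than in a dead end, which is exactly the trap customers resent.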
Netflix’s Saki Takeda takes the stance most contact centers should adopt: automate routine work so agents can spend time on quality and empathy. Not the other way around.
The hidden win: employee experience (EX)
Tiffani Bova argues CX and EX must be merged. In contact centers, that isn’t theory—it’s math.
If you add AI but don’t redesign agent workflows, you get:
- More complex calls reaching agents (because easy ones are automated)
- Higher cognitive load (agents inherit the hardest edge cases)
- Lower morale (agents become “cleanup crew” for broken automation)
A better approach is to treat AI as an agent experience platform first: knowledge, summaries, coaching, next-best-action, and fewer toggles. When agent effort drops, customer outcomes rise.
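A minimal sketch of what that can look like in practice: one assist payload that lands on the agent desktop instead of five separate tools. The field names are assumptions for illustration, not a specific product schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAssistPayload:
    intent: str
    sentiment: str
    customer_summary: str                        # drafted before the agent picks up
    suggested_articles: list[str] = field(default_factory=list)
    next_best_action: str = ""
    acw_draft: str = ""                          # after-call-work draft the agent edits, not writes

payload = AgentAssistPayload(
    intent="billing_dispute",
    sentiment="frustrated",
    customer_summary="Gold tier, third contact this month, bill jumped after a plan change.",
    suggested_articles=["Proration explained", "Plan-change billing FAQ"],
    next_best_action="Confirm the proration charge, then offer a one-time credit.",
    acw_draft="Customer disputed proration charge; explained plan-change billing; credit offered.",
)
print(payload.next_best_action)
```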
AI in CX isn’t a chatbot project (and most companies get this wrong)
Matt Watkinson warns against starting with tech—especially “the dreaded chatbots”—instead of starting with value. That’s the most common failure pattern in AI customer support.
Here’s what works:
Start with friction, not channels
Pick 3–5 problems that create measurable pain (the sketch after these lists shows how to size one of them from a contact log). Examples:
- Repeat contacts within 7 days (same issue reopened)
- Transfers (customer bounced between teams)
- Long after-call work (agents spending minutes documenting)
- Low containment with high customer effort (self-service that fails)
- Backlog in email or messaging (slow response drives churn)
Then decide whether the solution is:
- Better self-service and knowledge
- Better routing and triage
- Better agent assist
- Better proactive service
- Or a process fix (yes, sometimes it’s not AI)
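Sizing these problems doesn't require a platform upgrade; a contact log export is usually enough. Here's a toy sketch of the repeat-contact metric, with illustrative fields and a 7-day window as assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy contact log: (customer_id, intent, timestamp). In practice this comes
# from your platform's export; the fields here are illustrative.
contacts = [
    ("c1", "billing", datetime(2025, 11, 1, 9, 0)),
    ("c1", "billing", datetime(2025, 11, 4, 14, 0)),    # reopened within 7 days
    ("c2", "delivery", datetime(2025, 11, 2, 10, 0)),
    ("c2", "delivery", datetime(2025, 11, 20, 10, 0)),  # outside the window
]

def repeat_contact_rate(log, window=timedelta(days=7)) -> float:
    """Share of contacts followed by another contact on the same intent within the window."""
    by_thread = defaultdict(list)
    for customer, intent, ts in log:
        by_thread[(customer, intent)].append(ts)
    repeats = total = 0
    for timestamps in by_thread.values():
        timestamps.sort()
        total += len(timestamps)
        repeats += sum(
            1 for earlier, later in zip(timestamps, timestamps[1:]) if later - earlier <= window
        )
    return repeats / total if total else 0.0

print(f"Repeat contact rate (7d): {repeat_contact_rate(contacts):.0%}")  # 25%
```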
Aurelia Pollet’s “CX with substance” viewpoint matters here: AI should fix real friction and prove ROI. If you can’t tie an AI initiative to a specific operational metric, it’s not ready.
The emerging playbook: predictive CX + real-time orchestration
Several leaders point to prediction and orchestration as the next frontier.
Kristi Faltorusso talks about anticipating risk and behavior before issues happen. Faran Niaz calls out “predictive and autonomous CX” powered by real-time data plus human-centered design. Bill Staikos highlights two ideas that are about to show up in more enterprise roadmaps: synthetic data and AI-orchestrated experiences.
What “AI-orchestrated experiences” means in plain language
It means the customer doesn’t experience your org chart.
Instead of:
- Customer explains the problem
- Gets authenticated
- Gets transferred
- Repeats the story
- Waits for another team
You build an orchestration layer that:
- Understands intent
- Pulls context from CRM, billing, order systems
- Chooses the best resolution path
- Applies policies consistently
- Escalates with a full summary when needed
This is where AI agents in the contact center become real value—when they coordinate tasks across systems, not just chat.
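As a sketch of that flow, assuming you already have connectors for CRM, billing, and order systems: every function below stands in for a real integration, and the intents and policies are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    customer_id: str
    intent: str
    crm: dict = field(default_factory=dict)
    billing: dict = field(default_factory=dict)
    orders: dict = field(default_factory=dict)

def pull_context(ctx: Context) -> Context:
    ctx.crm = {"tier": "gold"}           # stand-in for crm_client.get(ctx.customer_id)
    ctx.billing = {"past_due": True}     # stand-in for billing_client.get(ctx.customer_id)
    ctx.orders = {"late_shipments": 1}   # stand-in for orders_client.get(ctx.customer_id)
    return ctx

def choose_path(ctx: Context) -> str:
    """Apply the same policy every time instead of leaving it to whoever picks up."""
    if ctx.intent == "where_is_my_order" and ctx.orders["late_shipments"]:
        return "offer_reroute_or_refund"
    if ctx.intent == "billing_dispute" and ctx.billing["past_due"]:
        return "escalate_to_agent"
    return "self_service_answer"

def escalation_summary(ctx: Context, path: str) -> str:
    """What the agent sees, so the customer never repeats the story."""
    return (f"Customer {ctx.customer_id} ({ctx.crm['tier']} tier), intent '{ctx.intent}', "
            f"path '{path}', past due: {ctx.billing['past_due']}, "
            f"late shipments: {ctx.orders['late_shipments']}.")

for intent in ("where_is_my_order", "billing_dispute"):
    ctx = pull_context(Context(customer_id="c42", intent=intent))
    path = choose_path(ctx)
    print(intent, "->", path)
    if path == "escalate_to_agent":
        print(escalation_summary(ctx, path))
```

The escalation summary is the part agents notice first: context arrives with the case instead of being re-asked.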
Why synthetic data is suddenly relevant
If you’re in a regulated industry (insurance, banking, healthcare, telecom), using customer interaction data for model training and testing can create privacy and compliance headaches.
Synthetic data offers a pragmatic path:
- Test conversation flows without exposing PII
- Simulate edge cases (rare but critical scenarios)
- Stress-test routing, QA scoring, and automation logic
Used responsibly, it can shorten the time from pilot to production—without risky shortcuts.
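A toy sketch of what that can look like, assuming you only need the structure of conversations (intents, phrasing variants, edge cases) rather than real customer text. The templates and slot values are invented.

```python
import random

# Templates and slot values are invented; no real customer text is involved.
TEMPLATES = {
    "billing_confusion": [
        "Why is my bill {amount} this month? It's usually less.",
        "I was charged {amount} after changing plans. Is that a mistake?",
    ],
    "delivery_delay": [
        "My order was due {days} days ago and still hasn't arrived.",
        "Tracking hasn't moved in {days} days. Can I get a refund?",
    ],
}

def synthetic_conversations(n: int, seed: int = 7) -> list[dict]:
    rng = random.Random(seed)  # fixed seed so test runs stay reproducible
    rows = []
    for i in range(n):
        intent = rng.choice(sorted(TEMPLATES))
        text = rng.choice(TEMPLATES[intent]).format(
            amount=f"${rng.randint(40, 400)}", days=rng.randint(2, 21)
        )
        rows.append({"id": f"synth-{i}", "intent": intent, "text": text})
    return rows

# Feed these into routing, QA scoring, or bot regression tests instead of real transcripts.
for row in synthetic_conversations(3):
    print(row)
```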
What to measure in 2026: proving AI ROI without fooling yourself
AI programs fail when teams measure the wrong things. Deflection and containment can look great while churn quietly rises.
A more honest scorecard for AI in customer service:
The “trust + efficiency” metric set
- Customer Effort Score (CES) by channel and intent (self-service must reduce effort)
- First Contact Resolution (FCR) for intents touched by AI
- Repeat contact rate within 7/14 days
- Transfer rate and “ping-pong” transfers
- Time to resolution, not just average handle time
- Agent utilization of AI assist (adoption is a leading indicator)
- QA outcomes: compliance, empathy behaviors, accuracy
- Containment with satisfaction (containment without CSAT isn’t a win)
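That last metric is worth encoding explicitly so nobody games it. A minimal sketch, assuming each self-service session records whether it stayed contained plus an optional CSAT score; the 4-out-of-5 threshold is an assumption to tune.

```python
# Session fields and the 4-of-5 CSAT threshold are assumptions to adapt.
sessions = [
    {"contained": True,  "csat": 5},
    {"contained": True,  "csat": 2},     # contained but unhappy: not a win
    {"contained": False, "csat": None},  # escalated to an agent
    {"contained": True,  "csat": None},  # no survey response: excluded from the rate
]

def containment_with_satisfaction(rows, threshold: int = 4) -> float:
    rated = [r for r in rows if r["contained"] and r["csat"] is not None]
    if not rated:
        return 0.0
    return sum(r["csat"] >= threshold for r in rated) / len(rated)

raw_containment = sum(r["contained"] for r in sessions) / len(sessions)
print(f"Raw containment: {raw_containment:.0%}")                                        # 75%
print(f"Containment with satisfaction: {containment_with_satisfaction(sessions):.0%}")  # 50%
```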
Conny Kalcher’s empathy-first stance (including structured empathy training) is a useful reminder: your AI program is only as good as the experience it produces under stress. Measure how customers feel when things go wrong, not just when everything’s fine.
How to use these 40 leaders without turning it into “thought leader theater”
Following smart people is easy. Turning their ideas into action is the hard part.
Here’s a simple way to operationalize the list in your contact center and CX planning cycle:
- Pick 5 leaders with different lenses: operations, design, empathy/culture, data/AI, and industry-specific expertise.
- Run a quarterly “trend-to-decision” session: each leader’s big idea must map to a decision you can make (or reject) in 60 minutes.
- Translate trends into experiments: one hypothesis, one metric, one owner, 30–60 days.
- Decide what you won’t do: saying no is how you avoid random AI pilots.
If you only do one thing: treat AI as a service operating model change, not a tooling upgrade. That’s the dividing line between contact centers that improve and contact centers that get noisier.
Where this goes next for AI in Customer Service & Contact Centers
The leaders in CX Network’s 2026 list are aligned on a core truth: AI will increasingly run the “boring middle” of customer service—while humans become more critical in the moments that define trust. The winners won’t be the teams with the most automation. They’ll be the teams that make automation feel respectful, fast, and easy to escape.
If you’re planning your 2026 roadmap now, focus on three moves: proactive prevention, agent experience upgrades, and orchestration across systems. That’s where ROI shows up, and it’s where customers actually notice the difference.
The next decision on your roadmap is worth naming now: which customer problem should your AI solve before the customer ever contacts you?