Sierra’s $100M ARR signals enterprise demand for AI agents. Learn where contact centers get ROI and how to deploy AI safely with measurable results.

AI Agents in Contact Centers: Lessons from Sierra’s $100M ARR
$100M in ARR in under two years is the kind of number that makes enterprise buyers pay attention—especially when the product category is still being argued about in boardrooms. Bret Taylor’s Sierra hitting that milestone is a loud signal: AI agents aren’t a “someday” bet anymore, they’re a budget line item.
For customer service and contact center leaders, this matters for one reason: demand isn’t rising because AI is interesting. Demand is rising because support teams are drowning—in volume, complexity, channel sprawl, staffing constraints, and customer expectations that keep climbing. AI agents are getting adopted fast because they help teams do more than deflect a few FAQs. When implemented well, they take real work off the queue.
This post is part of our “AI in Customer Service & Contact Centers” series, and we’re using Sierra’s rapid growth as a case study for a broader point: the market is rewarding AI that produces measurable operational outcomes—faster resolution, lower cost per contact, and better customer experience.
Why Sierra’s $100M ARR matters to customer service leaders
The direct takeaway from Sierra’s growth is simple: enterprises are buying AI agents at scale. The second takeaway is more important: they’re buying them because they’ve moved beyond experimentation and found use cases that survive procurement, security review, and front-line scrutiny.
A lot of contact centers spent 2018–2022 buying “chatbots” that mostly:
- answered a thin slice of repetitive questions,
- broke the moment a customer went off-script,
- and dumped people into an agent handoff without context.
That left many leaders skeptical. But the current AI agent wave is different because it’s built on large language models (LLMs) plus better orchestration: tool calling, workflow execution, retrieval over your knowledge base, and tight integration with CRMs and ticketing.
Snippet-worthy reality: A useful AI agent isn’t a chatbot. It’s a worker that can read, decide, and complete steps across your systems—with guardrails.
The “ARR signal” is really an adoption signal
ARR doesn’t just mean revenue. It often implies:
- Renewals are happening (value is real enough to keep paying).
- Deployments expand beyond a pilot team.
- Risk teams are saying yes to data access, integrations, and governance.
In other words, Sierra’s milestone suggests the enterprise market is finding a workable path for AI agents in customer service—one that produces ROI without turning every conversation into a compliance incident.
What’s changed: from scripted bots to AI agents that do work
AI agents are getting traction because they can handle messy support realities: partial information, policy nuance, multi-step resolutions, and customers who don’t explain things cleanly.
Here’s the practical difference.
Chatbots: “Answer this question”
Classic bots are good at:
- order status lookup (if the customer provides the exact identifier)
- store hours
- password reset workflows
They’re fragile when the request becomes multi-intent (“I need to change my address and also dispute the last charge”) or requires judgment.
AI agents: “Resolve this issue”
A well-designed AI agent in a contact center can:
- classify intent and capture missing details in natural language
- retrieve relevant policy and product guidance (RAG)
- execute actions through tools (refund, replacement, appointment reschedule)
- summarize and route when humans are needed
- document the interaction automatically (notes, disposition codes)
The goal isn’t to make customers talk to machines. The goal is to get the work done with less waiting and less agent effort.
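To make the difference concrete, here’s a minimal sketch of that resolve-the-issue loop. The `call_llm` callable, the JSON action format, and the two tools are assumptions for illustration, not any vendor’s API; a production agent would add retrieval, guardrails, and logging around this core.

```python
# Minimal sketch of a "resolve, don't just answer" loop (illustrative only).
# `call_llm` is a hypothetical callable that returns a JSON action string,
# e.g. {"type": "tool_call", "name": "lookup_order", "args": {...}}
# or   {"type": "reply", "text": "..."}; swap in your model provider.
import json

TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "reschedule_appointment": lambda appt_id, new_date: {"ok": True, "date": new_date},
}

def resolve(customer_message: str, call_llm, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": customer_message}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))
        if action["type"] == "reply":          # done: answer the customer
            return action["text"]
        if action["type"] == "tool_call":      # needs data or an action first
            result = TOOLS[action["name"]](**action["args"])
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:                                  # anything unexpected -> human
            break
    return "Let me connect you with a teammate who can finish this."
```

The design choice worth noticing is the loop: the model can take several tool steps before it replies, which is what lets it finish multi-step work instead of answering one question and stopping.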
The 2025 customer service reality: peaks don’t care about your staffing plan
It’s December 2025, and many support teams are living through the same seasonal pressures they face every year:
- shipping delays and “where’s my order?” surges
- returns and exchange spikes
- billing and subscription churn requests
- staffing coverage gaps due to PTO
AI agents are attractive right now because they can absorb peak demand without the lag time of hiring, training, and quality calibration. That’s not theoretical. It’s operational relief.
Where AI agents actually pay off in contact centers
The biggest ROI usually comes as much from shrinking handle time and after-call work as from “deflecting” contacts. If you only measure deflection, you’ll undercount the value.
1) Tier-1 resolution without the dead-end experience
AI agents can resolve common issues end-to-end—if you let them take actions, not just respond.
Examples that tend to work well:
- shipping address change before fulfillment
- subscription plan downgrade/upgrade
- appointment rescheduling
- payment link generation and dunning support
What I’ve found in real deployments: customers tolerate automation when it’s fast and accurate. They hate it when it stalls.
2) Agent assist that reduces Average Handle Time (AHT)
Even when a human stays in the loop, AI can cut time by:
- generating suggested replies based on policy
- drafting empathetic but specific language (without being robotic)
- pulling customer context from CRM fields
- recommending the next best action
Agent assist is often the fastest “safe win” because it improves efficiency without needing high-risk autonomy.
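As a rough illustration of the assist pattern, the sketch below drafts a reply from CRM context plus an approved policy snippet and hands it to the human agent for review. `call_llm` and the ticket field names are hypothetical.

```python
def suggest_reply(ticket: dict, policy_snippet: str, call_llm) -> str:
    """Draft a reply for the human agent to review and edit, not auto-send."""
    prompt = (
        "You are assisting a support agent. Using only the policy below, "
        "draft a short, specific reply. If the policy does not cover the "
        "issue, say so instead of guessing.\n\n"
        f"Policy:\n{policy_snippet}\n\n"
        f"Customer: {ticket['customer_name']}\n"
        f"Issue: {ticket['description']}\n"
        f"Plan: {ticket['plan']}\n"
    )
    return call_llm(prompt)  # the human stays in the loop before sending
```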
3) After-call work automation (the quiet budget drain)
Contact centers lose a shocking amount of time to documentation. AI can:
- summarize the conversation
- populate ticket fields
- create disposition notes
- draft follow-up emails
This is where you can see meaningful productivity gains without changing customer-facing workflows.
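Here’s a minimal sketch of that documentation step, assuming a hypothetical `call_llm` that returns JSON and illustrative ticket field names. The structured output is the point: summaries and dispositions land directly in the ticket instead of being typed by hand.

```python
import json

SUMMARY_PROMPT = (
    "Summarize this support conversation as JSON with keys: "
    "'summary' (2 sentences), 'disposition' (one of: resolved, "
    "escalated, follow_up), and 'follow_up_email' (or null).\n\n{transcript}"
)

def document_interaction(transcript: str, ticket: dict, call_llm) -> dict:
    """Populate ticket fields from the conversation instead of manual notes."""
    result = json.loads(call_llm(SUMMARY_PROMPT.format(transcript=transcript)))
    ticket["notes"] = result["summary"]
    ticket["disposition"] = result["disposition"]
    ticket["pending_email_draft"] = result["follow_up_email"]
    return ticket
```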
4) QA and coaching at scale
AI-driven QA can evaluate far more than the 1–3% of calls that humans typically sample. Modern approaches can:
- detect compliance language
- identify sentiment and escalation signals
- surface coaching opportunities by theme
AI doesn’t replace QA leaders; it gives them coverage and focus.
The implementation playbook: what enterprises get wrong (and how to avoid it)
Most companies get this wrong by treating AI agents like a widget you bolt onto chat. The successful programs treat them like a new operational layer with clear ownership.
Start with outcomes, not channels
Pick 2–3 measurable outcomes that matter this quarter:
- reduce cost per contact
- reduce AHT
- increase first-contact resolution
- increase containment without harming CSAT
Then map those outcomes to workflows.
A helpful internal question: “Which support tasks are repetitive, policy-bound, and expensive?” That’s agent territory.
Design the agent around workflows (not prompts)
A “prompt-only bot” breaks the moment reality shows up. A serviceable AI agent needs:
- a defined set of tools it’s allowed to use (refund tool, order lookup tool, etc.)
- a knowledge source with versioning and ownership
- a policy layer (“what it must never do”)
- handoff rules that preserve context
Think of it as product design plus operations, not just model selection.
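One way to picture “workflow, not prompt” is a declarative agent definition that names the tools, knowledge source, hard rules, and handoff in one place. The structure below is an assumption for illustration, not any particular platform’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentDefinition:
    """Illustrative agent spec: tools, knowledge, policy, and handoff together."""
    allowed_tools: list[str]        # e.g. ["order_lookup", "create_return_label"]
    knowledge_base: str             # versioned source of truth, with an owner
    hard_rules: list[str]           # "what it must never do"
    handoff: dict = field(default_factory=dict)  # where escalations go, with context

returns_agent = AgentDefinition(
    allowed_tools=["order_lookup", "create_return_label"],
    knowledge_base="kb://returns-policy@v12",
    hard_rules=["never promise refunds above policy", "never discuss other accounts"],
    handoff={"queue": "tier2_returns", "include": ["transcript", "order_id"]},
)
```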
Build guardrails that match risk, not fear
A mature approach uses tiered permissions:
- Read-only: look up order status, policy, account details
- Write with approval: draft a refund, human clicks “approve”
- Autonomous within limits: refund up to $X, replace within warranty window
This is how you scale safely: autonomy expands as performance proves itself.
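Here’s a sketch of how those tiers might be enforced, with invented tool names and thresholds: autonomy becomes a property of each action, checked at execution time rather than promised in a prompt.

```python
from enum import Enum

class Tier(Enum):
    READ_ONLY = 1
    WRITE_WITH_APPROVAL = 2
    AUTONOMOUS_WITHIN_LIMITS = 3

# Illustrative policy table: which tier each tool sits in, plus its limits.
TOOL_POLICY = {
    "order_lookup":  {"tier": Tier.READ_ONLY},
    "issue_refund":  {"tier": Tier.AUTONOMOUS_WITHIN_LIMITS, "max_amount": 50.00},
    "close_account": {"tier": Tier.WRITE_WITH_APPROVAL},
}

def authorize(tool: str, args: dict) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for a requested action."""
    policy = TOOL_POLICY.get(tool)
    if policy is None:
        return "deny"
    if policy["tier"] == Tier.READ_ONLY:
        return "allow"
    if policy["tier"] == Tier.AUTONOMOUS_WITHIN_LIMITS:
        limit = policy.get("max_amount", 0)
        return "allow" if args.get("amount", 0) <= limit else "needs_approval"
    return "needs_approval"
```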
Instrumentation: measure the right metrics early
If you don’t measure, you’ll argue.
Track these from week one:
- Containment rate (and why containment fails)
- Escalation rate by intent
- AHT for assisted agents vs. baseline
- Recontact rate within 7 days (great proxy for resolution quality)
- CSAT and “was this easy?” effort score where available
- Cost per resolution (not just cost per contact)
A line I use with exec teams: If the AI “solves” the chat but the customer calls back tomorrow, you didn’t automate support—you automated frustration.
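For reference, containment and 7-day recontact rate fall out of basic conversation logs. The sketch below assumes a simple list of conversation records with illustrative field names.

```python
from datetime import timedelta

def support_metrics(conversations: list[dict]) -> dict:
    """Compute containment and 7-day recontact rate from conversation records.

    Each record is assumed to look like:
    {"customer_id": "c1", "ended_at": datetime, "escalated": bool}
    """
    total = len(conversations)
    contained = sum(1 for c in conversations if not c["escalated"])

    # A recontact: the same customer returns within 7 days of a prior conversation.
    recontacts = 0
    last_seen = {}
    for c in sorted(conversations, key=lambda c: c["ended_at"]):
        prev = last_seen.get(c["customer_id"])
        if prev and c["ended_at"] - prev <= timedelta(days=7):
            recontacts += 1
        last_seen[c["customer_id"]] = c["ended_at"]

    return {
        "containment_rate": contained / total if total else 0.0,
        "recontact_rate_7d": recontacts / total if total else 0.0,
    }
```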
“People also ask” questions your leadership team will bring up
These come up in almost every AI in customer service discussion.
Will AI agents replace human agents?
No. They replace specific tasks and reshape roles. Teams that adopt AI agents typically shift humans toward:
- complex exceptions
- escalations that require judgment
- retention and empathy-heavy conversations
- proactive outreach
Headcount may change over time, but the first-order effect is usually capacity creation—handling growth without hiring at the same rate.
Do AI agents hurt customer experience?
They hurt customer experience when they’re deployed as a gatekeeper. They improve it when:
- the agent can finish the job quickly,
- escalation is easy,
- and handoffs include full context.
Customers don’t hate automation. They hate being trapped.
What about hallucinations and compliance risk?
Treat customer service AI like any other production system: constrain it.
- Use retrieval over approved content.
- Limit actions with policy checks.
- Log every tool call and decision.
- Add “I don’t know” and escalation behaviors.
The mistake is assuming your only options are “fully autonomous” or “not allowed.” There’s a workable middle.
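A minimal sketch of that middle ground: ground answers in approved content, refuse when retrieval comes back empty, and log every decision. `search_approved_kb`, `call_llm`, and the audit format are assumptions for illustration.

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("agent.audit")

def answer_with_guardrails(question: str, search_approved_kb, call_llm) -> str:
    """Answer only from approved content; escalate instead of guessing."""
    passages = search_approved_kb(question, top_k=3)
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "passages": [p["id"] for p in passages],
    }))
    if not passages:
        return "I'm not certain about this one, so let me bring in a teammate."
    prompt = (
        "Answer using ONLY the excerpts below. If they don't contain the "
        "answer, reply exactly: ESCALATE.\n\n"
        + "\n\n".join(p["text"] for p in passages)
        + f"\n\nQuestion: {question}"
    )
    reply = call_llm(prompt)
    return reply if reply.strip() != "ESCALATE" else "Let me get a human to confirm this for you."
```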
What Sierra’s momentum suggests about 2026 planning
Sierra’s $100M ARR milestone is a market signal that AI agent platforms are clearing the enterprise bar—security, governance, and measurable outcomes. For contact center leaders heading into 2026 planning, that changes the baseline expectation:
- Your competitors are probably already testing AI agents in customer service.
- Vendors are packaging stronger integrations and guardrails because buyers demand them.
- The “AI pilot” era is fading; leadership will ask for operating plans and ROI.
If you’re building your roadmap, I’d prioritize a phased approach:
- Agent assist + after-call work (fast wins, low risk)
- Contained Tier-1 workflows (returns, reschedules, address changes)
- Autonomy expansion with limits (refund thresholds, warranty rules)
- Omnichannel consistency (chat + voice + email using the same brain)
The goal is a contact center that scales without burning people out.
What to do next (if you want ROI, not a science project)
Start by choosing one workflow where the pain is obvious and measurable—returns status, billing disputes, appointment changes—and build an AI agent that can actually complete the steps. Keep the scope tight. Make handoff graceful. Instrument everything.
If you want a practical checklist, ask yourself three questions:
- What’s the single most common multi-step issue we handle?
- Which systems does the agent need to read and write to resolve it?
- What’s the smallest safe autonomy we can allow on day one?
Sierra’s growth is a reminder that the market is rewarding teams that operationalize AI agents, not just demo them. The next 12 months will favor customer service orgs that treat AI like a workforce strategy—measurable, governed, and built around real workflows.
Where would an AI agent remove the most friction in your contact center right now: faster resolution, shorter handle time, or fewer repeat contacts?