AI Strategy That Survives Analyst Scrutiny in 2026

AI Business Tools Singapore • By 3L3C

Build an AI strategy that stands up to scrutiny. Lessons from IQVIA’s AI defence—and a practical framework for Singapore businesses to adopt AI safely.

Tags: IQVIA, enterprise AI, AI governance, AI ROI, Singapore SMEs, business strategy



A six-day selloff that erased about US$830 billion from software and services stocks is a loud signal: markets don’t just care whether a company “uses AI”. They care whether AI strengthens the business moat or quietly makes the company replaceable.

That’s why IQVIA’s recent earnings call (and the analyst pushback it triggered) is worth paying attention to—especially if you’re building an AI roadmap for a Singapore business. IQVIA is effectively arguing: AI isn’t the threat; being generic is the threat. Their claim rests on something very concrete—proprietary data assets and workflow integration—not fancy demos.

This post is part of the AI Business Tools Singapore series, where we focus on practical AI adoption for marketing, operations, and customer engagement. IQVIA’s situation gives us a useful template: how to invest in AI in a way that holds up under skeptical questioning from finance, leadership, and customers.

What IQVIA’s earnings call really revealed about AI risk

Answer first: IQVIA’s debate wasn’t “AI good vs AI bad.” It was whether new AI tools (from players like Anthropic and others) can commoditise professional services and squeeze margins.

In the Reuters/CNA report, analysts questioned whether rapid advances in AI could displace parts of IQVIA’s services. That concern isn’t random—2025–2026 has been full of examples where AI features show up inside tools people already pay for, pushing standalone vendors into uncomfortable territory.

IQVIA’s CEO Ari Bousbib pushed back hard, calling the displacement fear “really frustrating” and arguing the opposite: AI increases the value of IQVIA because it has “the largest proprietary healthcare information assets in the world.” The point is simple: general-purpose models are powerful, but they don’t magically recreate trusted, legally usable, longitudinal datasets with governance, permissions, and domain-specific context.

Here’s the part many teams miss: markets aren’t rewarding “AI adoption.” They’re rewarding defensibility + distribution + data.

The numbers that matter (and why they matter)

The article includes a few figures that show how sentiment and fundamentals can diverge:

  • IQVIA shares fell over 8% after the call.
  • The broader selloff wiped ~US$830B in market value across software/services.
  • IQVIA guided 2026 adjusted EPS of US$12.55–US$12.85 vs US$12.95 expected (LSEG), partly due to ~US$80M higher interest expense.
  • Yet IQVIA’s Q4 EPS (US$3.42) and revenue (US$4.36B) beat estimates.

For Singapore operators reading this: even when execution is fine, “AI narrative risk” can dominate the conversation. Your AI plan needs to work in the real world and be explainable to stakeholders who are worried about becoming the next commoditised service.

The real lesson for Singapore businesses: build an “AI moat,” not an AI feature

Answer first: If your AI initiative can be replicated by plugging ChatGPT into a spreadsheet, you don’t have a strategy—you have a short-lived productivity hack.

IQVIA’s defence rests on two pillars that translate well to Singapore SMEs and mid-market firms:

  1. Unique inputs (data, processes, context, permissions)
  2. Workflow ownership (where AI is embedded and measured)

Pillar 1: Unique inputs — your proprietary data is the advantage

Most Singapore businesses don’t have “the largest healthcare dataset in the world.” That’s fine. You don’t need massive scale; you need non-public, high-signal data that connects to revenue or risk.

Examples of proprietary data that actually matters:

  • Customer engagement history: WhatsApp transcripts (properly consented), call logs, objections, proposal iterations
  • Operational reality: fulfilment times, defect reasons, supplier lead times, stockouts, returns
  • Pricing and margin detail: discount patterns, win/loss reasons, competitor mentions
  • Domain knowledge: SOPs, compliance playbooks, service checklists, claims workflows

When teams say “we’ll use AI for customer service,” I usually ask: with what knowledge base, what escalation rules, and what audit trail? If you can’t answer that, AI will create more noise than value.

Pillar 2: Workflow ownership — AI wins when it’s hard to remove

IQVIA isn’t pitching AI as a side tool; it’s implying AI will be woven into how clients do drug development and decision-making. For Singapore companies, the equivalent is embedding AI into daily workflows:

  • Marketing: content QA, compliance checks, audience segmentation, campaign reporting
  • Sales: lead triage, call summaries, next-best-action suggestions tied to CRM fields
  • Operations: demand forecasting, anomaly detection in inventory, supplier risk scoring
  • Finance: invoice exception handling, spend categorisation, collections prioritisation

If AI output doesn’t land back into the system of record (CRM/ERP/helpdesk) with accountability, it’s just “nice output” that nobody trusts.

A practical AI adoption framework (that answers analyst-style questions)

Answer first: The fastest way to reduce AI skepticism is to define ROI and risk in advance—then measure both.

Analysts essentially asked IQVIA: “Show us that AI won’t eat your lunch.” Your leadership will ask a similar question: “Are we funding something that makes us stronger—or something our competitors can copy in a week?”

Here’s a framework I’ve found works because it’s plain, testable, and forces trade-offs.

Step 1: Classify AI use cases into three buckets

  1. Efficiency plays (save time/cost)

    • Example: auto-drafting standard emails, summarising meeting notes
    • Risk: easy to copy; ROI can be real but defensibility is low
  2. Quality and control plays (reduce errors, improve consistency)

    • Example: automated compliance checks for ads; contract clause flagging
    • Risk: requires governance, but value compounds over time
  3. Moat plays (increase switching costs or unique insight)

    • Example: customer churn model built on your behavioural data; operations predictor based on your SKU-level history
    • Risk: needs data maturity and process change, but hardest to replicate

Many businesses stop at bucket #1, then wonder why AI feels underwhelming after the first month.
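As a quick sketch, the three-bucket triage can be captured in a few lines of Python. Everything here (`UseCase`, `classify`, the two signal fields) is illustrative naming for this post, not a real library; the logic simply encodes the heuristic that proprietary data points to a moat play and process control points to a quality play:

```python
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    EFFICIENCY = "efficiency"        # saves time/cost; easy to copy
    QUALITY = "quality_control"      # reduces errors; value compounds
    MOAT = "moat"                    # unique data; hardest to replicate

@dataclass
class UseCase:
    name: str
    uses_proprietary_data: bool        # signal for bucket 3
    changes_a_controlled_process: bool # signal for bucket 2

def classify(uc: UseCase) -> Bucket:
    """Rough triage following the three-bucket framework above."""
    if uc.uses_proprietary_data:
        return Bucket.MOAT
    if uc.changes_a_controlled_process:
        return Bucket.QUALITY
    return Bucket.EFFICIENCY

# Meeting-note summarisation uses no proprietary data and changes no
# controlled process, so it lands in the efficiency bucket.
print(classify(UseCase("meeting summaries", False, False)).value)  # → efficiency
```

The point of writing it down this bluntly: if every use case on your roadmap classifies as `EFFICIENCY`, you are funding productivity hacks, not a strategy.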

Step 2: Define success metrics before you ship anything

Use metrics that finance and operations will accept. Good examples:

  • Time-to-first-response down by X minutes
  • Ticket deflection up by X% without CSAT dropping
  • Forecast error (MAPE) reduced by X points
  • Lead-to-meeting conversion up by X%
  • Days sales outstanding (DSO) down by X days

Also define a “harm metric,” especially for customer-facing AI:

  • Hallucination rate in audited samples
  • Escalation rate
  • Compliance violations detected
  • Refund/return rate changes

If you can’t measure harm, you’ll either over-trust the model or ban it entirely.
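Both sides of the ledger are cheap to compute. A minimal sketch, with made-up demo numbers: `mape` measures forecast error in percentage points, and `harm_rate` is the flagged share of a manually audited sample (the `"flagged"` field is an assumed shape for your audit records):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percentage points."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

def harm_rate(audited_samples):
    """Share of audited AI outputs flagged as harmful (e.g. hallucinated)."""
    flagged = sum(1 for s in audited_samples if s["flagged"])
    return flagged / len(audited_samples)

# Before/after comparison: how many points of MAPE did the new model cut?
actual   = [100, 120, 80, 95]
baseline = [90, 140, 70, 100]
model    = [98, 118, 84, 93]
print(round(mape(actual, baseline) - mape(actual, model), 1))  # → 8.4
```

Report both numbers in the same review: "forecast error down 8 points, harm rate flat" is the sentence that ends most AI-skepticism meetings.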

Step 3: Put guardrails where they belong (not everywhere)

Answer first: Governance is most effective when it’s targeted—high-risk workflows get strict controls; low-risk workflows get speed.

A sensible setup for many Singapore firms:

  • Low risk (internal drafting): allow AI with a usage policy and basic logging
  • Medium risk (customer comms): require templates, approvals, and retrieval from a controlled knowledge base
  • High risk (regulated claims, healthcare, finance advice): human-in-the-loop, full audit trail, and restricted model access

This is how you keep momentum without creating a compliance headache.
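The three tiers above are easy to make explicit as a policy table, so nobody argues about controls per project. This is a sketch, not a product; the keys and control names are illustrative, and your real policy would live wherever your governance docs live:

```python
# Illustrative risk-tier policy table following the low/medium/high split above.
GUARDRAILS = {
    "low": {  # internal drafting
        "human_review": False,
        "logging": "basic",
        "knowledge_base_only": False,
    },
    "medium": {  # customer communications
        "human_review": True,         # templates + approvals before sending
        "logging": "basic",
        "knowledge_base_only": True,  # retrieval from a controlled KB
    },
    "high": {  # regulated claims, healthcare, finance advice
        "human_review": True,         # human-in-the-loop
        "logging": "full_audit_trail",
        "knowledge_base_only": True,
        "restricted_model_access": True,
    },
}

def controls_for(workflow_risk: str) -> dict:
    """Look up the controls a workflow must satisfy before AI output ships."""
    return GUARDRAILS[workflow_risk]

print(controls_for("high")["logging"])  # → full_audit_trail
```

The design choice that matters: low-risk tiers deliberately get fewer controls, which is what keeps the fast lane fast.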

“Will AI replace us?” The better question to ask in 2026

Answer first: AI replaces tasks, not companies—unless the company’s value is just those tasks.

IQVIA’s CEO is making an argument many service businesses need to hear: if your differentiation is access to trusted data, deep domain workflows, and outcomes you’re accountable for, AI tends to amplify your advantage.

If your business is primarily reselling generic knowledge work (“we make decks,” “we write reports,” “we summarise research”), you’re exposed. Not because AI is smarter than you, but because buyers now have cheaper ways to get “good enough” outputs.

A quick self-audit for Singapore teams

Use this checklist with your leadership team:

  • Do we own data that competitors can’t legally or practically replicate?
  • Are our processes documented well enough to automate parts safely?
  • Are we embedding AI into the tools people already use (CRM/ERP/helpdesk)?
  • Can we prove ROI with operational metrics within 60–90 days?
  • Do we have a policy for sensitive data, model access, and approvals?

If you answered “no” to three or more, your first AI project should probably be data readiness + one workflow—not a broad rollout.
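If you want to run this audit in a workshop, the scoring rule is simple enough to write down. A toy sketch of the "three or more no's" rule (the recommendation strings are just this post's wording):

```python
CHECKLIST = [
    "Do we own data that competitors can't legally or practically replicate?",
    "Are our processes documented well enough to automate parts safely?",
    "Are we embedding AI into the tools people already use (CRM/ERP/helpdesk)?",
    "Can we prove ROI with operational metrics within 60-90 days?",
    "Do we have a policy for sensitive data, model access, and approvals?",
]

def recommend(answers: list[bool]) -> str:
    """Apply the rule above: three or more 'no' answers => start narrow."""
    nos = sum(1 for a in answers if not a)
    if nos >= 3:
        return "Start with data readiness + one workflow"
    return "Ready for a broader, staged rollout"

# Example: a team that answered 'no' to questions 2, 3, and 4.
print(recommend([True, False, False, False, True]))
```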

What to do next: an AI tools plan you can defend

Markets are nervous for a reason: AI is compressing differentiation in places that used to feel safe. IQVIA’s story highlights a more durable approach—pair AI with assets and workflows that are hard to copy.

If you’re planning AI adoption in Singapore right now, I’d take a clear stance: start with one process where you have strong data, real volume, and a measurable pain point. Then build from there. The companies that win in 2026 won’t be the ones with the most AI pilots—they’ll be the ones that can show repeatable outcomes without taking on reckless risk.

Where would AI create the most defensible advantage in your business: marketing, operations, or customer engagement?

Source article: https://www.channelnewsasia.com/business/iqvia-backs-ai-strategy-analysts-question-impact-business-5910261