Visa finds 74% use AI to browse but many won’t pay. Learn how Singapore retailers can build trustworthy AI checkout with security, consent, and transparency.

AI Checkout Trust Gap: What Singapore Retailers Can Do
74% of consumers across Asia Pacific already use AI to find and compare products—and then many of them stop right at the moment that matters most: payment. Visa’s latest APAC study puts a number on what a lot of e-commerce teams are seeing in their funnels: 45% want significantly stronger security before they’ll complete an AI-assisted checkout, and 32% are reluctant to share payment information with AI systems.
Most companies get this wrong. They assume the problem is “educating customers about AI.” The data suggests the opposite. In places like Singapore (where 34% report wariness, above the regional average), Australia, and New Zealand, the most digitally mature shoppers are more cautious, not less. They’ve seen enough breaches, dark patterns, and vague privacy policies to treat anything “agentic” at checkout as a risk.
This post is part of our “AI dalam Peruncitan dan E-Dagang” (AI in Retail and E-Commerce) series, where we focus on practical AI use: personalised recommendations (cadangan peribadi), demand forecasting (ramalan permintaan), inventory management, and customer behaviour analytics—without losing consumer trust. Here’s how Singapore businesses can close the AI commerce trust gap using the right AI business tools, governance, and payment design.
The real problem: AI is trusted for discovery, not transactions
Answer first: Consumers trust AI for low-stakes tasks (discovery), but they demand proof for high-stakes tasks (payment). That trust boundary is where conversion rates live or die.
Visa’s APAC research (YouGov, Sept 2025; 14,764 respondents across 14 markets) paints a consistent pattern:
- Discovery is booming: 74% use AI to discover and compare products.
- Checkout is where trust collapses: 45% require stronger payment security assurances to proceed.
- Reluctance is structural, not niche: 32% are outright reluctant to share payment information with AI.
Why the split? Because the downside is different.
At discovery stage, bad AI means annoyance: irrelevant recommendations, extra scrolling, maybe a missed deal. At checkout, bad AI means: card exposure, account takeover, dispute headaches, or spending you didn’t authorise.
Here’s the line I keep coming back to: “AI doesn’t need to be smart to convert—AI needs to be accountable.”
What this looks like in a Singapore checkout funnel
Singapore’s e-commerce shoppers are fast, experienced, and not easily impressed.
If you’re introducing an AI shopping assistant, AI-driven bundles, or agentic ordering (“Buy this again for me”), expect a new drop-off point:
- Customers browse confidently with AI recommendations.
- They hesitate when the assistant asks to store a card, access PayNow/Wallet, or “place the order on your behalf.”
That hesitation often shows up as:
- Higher abandonment on “confirm payment”
- More “change of mind” cancellations right after purchase
- Increased customer support queries: “Did I authorise this?” “Why did the AI choose this?”
The fix isn’t adding more AI. It’s building trust primitives into the flow.
Why affluent and digitally savvy consumers are more sceptical
Answer first: In mature markets (including Singapore), sophisticated consumers understand the trade-offs—data sharing, manipulation risk, and unclear liability—so they demand higher standards.
Visa’s study reports that 39% of affluent households (US$8,000+ monthly income) have serious concerns about data usage, vs 29% among lower-income groups. And some of the most digital-first markets show higher wariness, including Singapore (34%).
That’s not anti-tech sentiment. It’s “I know what can go wrong” sentiment.
A useful mental model for Singapore retailers: affluent customers are often less persuaded by novelty and more persuaded by:
- clear controls
- transparent security measures
- explicit consent
- easy recourse if something goes wrong
The invisible trust killer: “Whose side is the AI on?”
Visa also notes that 26% of APAC consumers aren’t sure whether AI recommendations serve their interests or merchant profit margins.
That single doubt can poison the whole experience.
If your AI recommendations feel like upsell spam, customers won’t just ignore them—they’ll generalise: “If the recommendations are biased, the checkout might be biased too.”
For AI in retail and e-commerce, this matters because personalised recommendations (cadangan peribadi) only work long-term when customers believe they’re genuinely helpful.
What “trustworthy AI checkout” actually requires (not buzzwords)
Answer first: Trust at checkout is built with verifiable security, transparent decisioning, and clear accountability—implemented in product, not in policy pages.
Singapore businesses often ask, “Which AI tool should we use?” My stance: pick tools that let you operationalise the following four requirements.
1) Security customers can recognise at a glance
If 45% want “significantly stronger security assurances,” you need security signals that are visible and understandable.
Practical examples:
- Passkeys / biometric confirmation for “AI will place the order” actions
- Tokenised payment credentials (don’t store raw card details)
- Step-up authentication when the AI changes shipping address, basket value, or delivery timing
- Real-time notifications: “Your AI assistant is about to pay S$X at Merchant Y—Approve?”
Even when strong infrastructure exists behind the scenes, customers need the “front-of-house” cues.
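The step-up triggers above can be sketched as a simple policy function. This is a minimal sketch, not a payment-network requirement: the `AgentAction` fields, the S$50 default cap, and the 20% basket-change threshold are all illustrative assumptions.

```python
# Sketch: when an AI-initiated checkout action should require step-up
# authentication (passkey/biometric). All thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class AgentAction:
    amount_sgd: float
    changes_shipping_address: bool
    changes_delivery_window: bool
    basket_delta_sgd: float  # how much the AI changed the basket value

def requires_step_up(action: AgentAction, auto_approve_cap_sgd: float = 50.0) -> bool:
    """Return True when the customer should explicitly confirm the action."""
    if action.amount_sgd > auto_approve_cap_sgd:
        return True  # above the standing cap: always ask
    if action.changes_shipping_address or action.changes_delivery_window:
        return True  # the AI altered fulfilment details
    if abs(action.basket_delta_sgd) > 0.2 * action.amount_sgd:
        return True  # the AI changed the basket substantially
    return False
```

The point of encoding this as one function is that product, risk, and support teams review a single source of truth for “when do we interrupt the customer.”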
2) Consent that’s specific (not bundled)
The fastest way to lose trust is asking for broad permission upfront.
Instead:
- Separate consent for recommendations vs checkout execution
- Allow “one-time approval” mode by default
- Provide a clear switch: “AI can suggest” vs “AI can buy”
For Singapore’s PDPA-aware consumers, this also aligns with expectation: use limitation and clarity.
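The “suggest vs buy” split can be modelled as separate consent scopes rather than one blanket permission. A minimal sketch, assuming three scope names of my own invention; the key property is that the suggest scope alone never authorises payment.

```python
# Sketch: unbundled consent scopes for an AI shopping assistant.
# Scope names and the suggest-only default are illustrative assumptions.

from enum import Enum

class ConsentScope(Enum):
    SUGGEST = "ai_can_suggest"      # recommendations only
    BUY_ONCE = "ai_can_buy_once"    # requires a per-order approval token
    BUY_AUTO = "ai_can_buy_auto"    # standing authorisation (capped elsewhere)

DEFAULT_SCOPES = {ConsentScope.SUGGEST}  # nothing payment-related by default

def can_execute_checkout(granted: set, one_time_token_present: bool) -> bool:
    """The assistant may place an order only with a standing BUY_AUTO grant,
    or BUY_ONCE plus an explicit per-order token. SUGGEST never pays."""
    if ConsentScope.BUY_AUTO in granted:
        return True
    return ConsentScope.BUY_ONCE in granted and one_time_token_present
```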
3) An explanation that matches the moment
Checkout explanations should be short, concrete, and tied to a user action.
Good:
- “Chosen because you bought this twice in the last 60 days and it’s currently 10% cheaper than usual.”
Bad:
- “Recommended by our AI model.”
This is where AI tools for customer analytics can help: you’re already tracking behaviour; use it to generate human-readable reasons. That’s “explainability” that actually improves conversion.
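Generating the “good” explanation above from data you already hold can be as simple as templating over purchase history. A hedged sketch: the 60-day window and the input fields mirror the example wording, not any particular analytics product.

```python
# Sketch: build a short, concrete recommendation reason from behaviour data
# the shop already tracks. Field names and the 60-day window are assumptions.

from datetime import date, timedelta

def recommendation_reason(purchase_dates: list, today: date,
                          current_price: float, usual_price: float) -> str:
    recent = [d for d in purchase_dates if (today - d) <= timedelta(days=60)]
    parts = []
    if len(recent) >= 2:
        parts.append(f"you bought this {len(recent)} times in the last 60 days")
    if current_price < usual_price:
        pct = round((1 - current_price / usual_price) * 100)
        parts.append(f"it's currently {pct}% cheaper than usual")
    if not parts:
        return "Suggested based on your shopping history."
    return "Chosen because " + " and ".join(parts) + "."
```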
4) Clear liability and easy reversal
If agentic commerce is real, then disputes will happen. Make the resolution path obvious:
- One-click “Cancel AI order” within a short window
- A dedicated support category: “AI-assisted order issue”
- A receipt line showing what was automated vs what was manually selected
Trust grows when customers know they won’t be trapped.
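The one-click cancel window can be sketched as a single guard. The 30-minute window is an assumed policy for illustration, not a standard; most shops would also stop cancellation once fulfilment begins.

```python
# Sketch: cancellation window for AI-placed orders. The 30-minute window
# is an assumption; tune it to your fulfilment SLA.

from datetime import datetime, timedelta

CANCEL_WINDOW = timedelta(minutes=30)

def can_cancel_ai_order(placed_at: datetime, now: datetime,
                        fulfilment_started: bool) -> bool:
    """Allow one-click cancellation until fulfilment starts or the window
    closes, whichever comes first."""
    return (not fulfilment_started) and (now - placed_at) <= CANCEL_WINDOW
```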
A practical playbook for Singapore retailers (30 days)
Answer first: Start with AI for discovery and operations, then add controlled checkout automation only after you’ve implemented security, consent, and auditability.
A lot of businesses jump straight to flashy “agentic checkout.” I’d do it in phases.
Week 1: Audit where AI touches customer trust
Map every point where AI influences:
- pricing
- ranking and recommendations (cadangan peribadi)
- promotions and bundles
- checkout steps
- payment storage
Then label each interaction as low-risk (discovery) or high-risk (payment/identity).
Deliverable: a simple “AI touchpoints & risk” sheet your product and ops team can agree on.
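That sheet can live as data in your repo so product and ops literally sign off on the same artefact. A minimal sketch; the touchpoint names and labels are examples, not a standard taxonomy.

```python
# Sketch: the "AI touchpoints & risk" sheet as reviewable data.
# Touchpoint names and risk labels are illustrative examples.

AI_TOUCHPOINTS = {
    "recommendation_ranking": "low",   # discovery: worst case is irrelevance
    "dynamic_bundles":        "low",
    "promotion_targeting":    "low",
    "checkout_autofill":      "high",  # touches payment/identity
    "stored_card_access":     "high",
    "autonomous_reorder":     "high",
}

# High-risk touchpoints are the ones that get consent, step-up, and audit work.
HIGH_RISK = sorted(t for t, risk in AI_TOUCHPOINTS.items() if risk == "high")
```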
Week 2: Add friction in the right place (yes, friction)
Not all friction is bad. At checkout, appropriate friction is reassurance.
Implement:
- explicit approval for any autonomous purchase
- spending caps (e.g., “AI can purchase up to S$50 without asking”)
- step-up authentication for changes and high-value baskets
This is a place where AI business tools can help by learning normal purchase patterns and triggering step-up only when something’s unusual.
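“Learning normal purchase patterns” doesn’t have to mean a model on day one. A hedged sketch using a robust z-score on past order values; the history minimum and the threshold of 3 are assumptions to tune, and `statistics.median` from the standard library does the work.

```python
# Sketch: trigger step-up only when an order looks unusual for this
# customer. Uses median absolute deviation; threshold is an assumption.

import statistics

def is_unusual_order(amount_sgd: float, past_amounts: list,
                     threshold: float = 3.0) -> bool:
    if len(past_amounts) < 5:
        return True  # not enough history: always ask
    median = statistics.median(past_amounts)
    mad = statistics.median(abs(a - median) for a in past_amounts) or 1.0
    return abs(amount_sgd - median) / mad > threshold
```

Anything fancier (seasonality, category shifts) can replace this function later without changing the checkout flow around it.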
Week 3: Build a checkout audit trail that customer support can use
If your support team can’t answer “Why did this happen?”, customers won’t trust your AI.
What to log (minimally):
- what the AI changed (quantity, variant, delivery, merchant)
- what data it used (purchase history window, saved preferences)
- what approvals were captured (time, method, device)
This improves fraud handling and reduces refund costs.
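The three logging categories above map naturally onto one record per AI-assisted order. A minimal sketch; the field names are illustrative, and `to_support_view` stands in for whatever flat export your helpdesk tooling consumes.

```python
# Sketch: minimal audit record for an AI-assisted order, covering what
# changed, what data was used, and what approval was captured.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AiOrderAudit:
    order_id: str
    changes: dict          # e.g. {"quantity": [1, 2]} (before/after)
    data_used: dict        # e.g. {"history_window_days": 60}
    approval_method: str   # e.g. "passkey", "one_time_code"
    approval_device: str
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def to_support_view(audit: AiOrderAudit) -> dict:
    """Flat dict a support agent (or export job) can read directly."""
    return asdict(audit)
```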
Week 4: Launch a “suggest-first” mode and measure trust, not just sales
Roll out two modes:
- Suggest-first: AI recommends, user clicks to confirm.
- Auto-buy (limited): only for repeat purchases, capped value, strong authentication.
Measure:
- checkout completion rate by mode
- support tickets per 1,000 orders
- refund/chargeback rates
- opt-in rate to auto-buy after 2–3 successful suggest-first purchases
If opt-in is low, don’t force it. Improve explanations and controls first.
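Computing those per-mode metrics is straightforward once you tag each order with its mode. A sketch under an assumed input shape (one set of counters per mode); run it once for suggest-first and once for auto-buy and compare.

```python
# Sketch: trust metrics per checkout mode. Input counters are assumed to
# come from your order and support systems, tagged by mode.

def trust_metrics(orders: int, checkouts_started: int,
                  support_tickets: int, refunds: int) -> dict:
    return {
        "completion_rate": orders / checkouts_started if checkouts_started else 0.0,
        "tickets_per_1000": 1000 * support_tickets / orders if orders else 0.0,
        "refund_rate": refunds / orders if orders else 0.0,
    }
```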
People also ask: “Should Singapore SMEs avoid agentic commerce?”
Answer first: No—SMEs should adopt AI where it removes operational pain (forecasting, inventory, service) and earn the right to automate checkout gradually.
Agentic commerce is attractive because it promises higher conversion and repeat purchases. But the Visa data signals a clear constraint: trust is the limiting factor, especially in developed markets.
For many Singapore SMEs, the highest ROI AI uses are still behind the scenes:
- demand forecasting (ramalan permintaan) to reduce stockouts before campaigns
- inventory optimisation to cut cash tied up in slow movers
- customer service automation with strict guardrails
- smarter segmentation for promotions that feel relevant (not creepy)
Then, once customers consistently experience value, you can introduce more automation at checkout—carefully.
A useful rule: if a customer wouldn’t let a junior staff member do it unsupervised, don’t let an AI agent do it unsupervised either.
Where this goes next for AI in retail and e-commerce
AI commerce in APAC isn’t “blocked.” It’s bifurcated: discovery is normal, checkout automation is still earning trust. Visa’s numbers—74% using AI for discovery vs 45% demanding stronger security to proceed—make the business case plain: the next competitive edge isn’t more recommendations. It’s trustworthy checkout design.
For Singapore retailers, this is an opportunity. If you can prove security, give customers control, and make AI decisions explainable, you’ll win the segment others struggle with: digitally savvy buyers who want convenience but won’t trade away safety.
If you’re planning to introduce AI tools for personalised recommendations, customer analytics, or agentic reorder flows, what’s the one piece of control you’ll put in front of the customer first: spending caps, explicit approvals, or transparent “why this” explanations?