When “Legit” Sources Trick AI at Work in Singapore

AI Business Tools Singapore · By 3L3C

AI repeats misinformation more when sources look official. Learn safeguards Singapore teams can use in marketing, ops, and customer engagement.

Tags: ai-literacy · llm-risk · rag · ai-governance · customer-experience · marketing-ops

A Reuters report carried by CNA this week highlighted a problem most companies still underestimate: AI can be more easily fooled by misinformation when it’s written in authoritative, professional language—the exact tone businesses deal with every day.

In the study (published in The Lancet Digital Health), researchers tested 20 large language models (LLMs) and found that models “believed” fabricated medical information about 32% of the time overall. When the misinformation appeared inside a realistic-looking hospital discharge note, the rate rose to almost 47%. When the same kind of falsehood showed up in a casual social media format (Reddit), propagation dropped to 9%.

That’s medicine—but the business lesson is broader. If your AI assistant is reading polished vendor proposals, formal policy documents, audit summaries, or consultant slide decks, the risk pattern is the same. The more “official” the source looks, the more likely your AI tool is to repeat it confidently.

This post is part of the AI Business Tools Singapore series, where we get practical about adopting AI for marketing, operations, and customer engagement—without creating new risks.

What the study reveals (and why businesses should care)

The key finding is simple and uncomfortable: LLMs tend to treat confident, domain-sounding language as true by default.

The researchers fed AI systems three kinds of inputs:

  • Real hospital discharge summaries with a single fabricated recommendation inserted
  • Common health myths collected from Reddit
  • 300 short clinical scenarios written by physicians

Then they hit the models with more than 1 million prompts—questions and instructions a user might realistically ask.

The numbers you should remember

  • 32%: overall likelihood that models “believed” fabricated information across sources
  • ~47%: when misinformation appeared in a realistic hospital note (looks authoritative)
  • 9%: when misinformation came from Reddit (looks informal)
  • 63.6%: the rate at which the most susceptible models accepted false claims

The same study also noted that prompt phrasing matters: if the user adopts an authoritative tone (“I’m a senior clinician… do you consider it correct?”), the model is more likely to agree with the falsehood.

Why this maps directly onto everyday business workflows

Singapore companies increasingly use AI tools in places where “official-looking text” is everywhere:

  • Sales and procurement: vendor proposals, quotations, compliance statements
  • HR and legal: policies, disciplinary letters, contract clauses
  • Finance and risk: audit findings, internal controls descriptions, board packs
  • Customer engagement: product FAQs, claims in marketing collateral, competitor comparisons

If your AI summarises, rewrites, or answers questions from these documents, it can launder errors into confident recommendations—and your team may treat the output as “validated” because it sounds polished.

A useful rule: LLMs are excellent at producing plausible text. They are not designed to “know” what’s true without verification steps.

The real risk: “trust laundering” through AI

The biggest business danger isn’t a model hallucinating a weird fact. It’s something subtler: AI makes bad information feel endorsed.

Here’s how trust laundering happens:

  1. An authoritative-looking document contains a mistake (or a misleading claim).
  2. Your AI tool summarises it, rewrites it, or answers questions about it.
  3. The AI’s confident tone removes friction (“sounds right”).
  4. The output gets forwarded internally, pasted into a deck, or sent to customers.

Now the misinformation has gone from “one questionable sentence in a PDF” to “company-approved guidance.”

Practical Singapore examples (where this hurts)

  • Marketing compliance: A supplement brand asks an AI tool to generate ad copy from a supplier’s brochure. The brochure overstates a health benefit. The AI repeats it cleanly, and suddenly you’ve got claims that trigger regulatory or platform takedown risk.
  • Procurement decisions: A vendor’s security questionnaire uses the right buzzwords (“ISO-aligned,” “zero trust,” “end-to-end encryption”). Your AI summarises it as “meets enterprise security requirements” without checking evidence.
  • Customer support: A chatbot trained on internal memos and product notes may turn an internal assumption into a public promise (“yes, we support that feature”)—and support tickets explode.

This matters because brand trust in Singapore is hard-won and quick to lose, especially in regulated industries (finance, health, education, public-facing services).

Why “just use a better model” isn’t enough

The CNA story mentions that OpenAI’s GPT models were among the least susceptible in this test. That’s useful, but I’ll take a firm stance here: model choice helps; it doesn’t solve the problem.

Even strong models can:

  • Repeat false claims when the source looks official
  • Over-agree with authoritative prompts
  • Overlook missing context (what’s omitted can be as important as what’s written)

If your AI workflow doesn’t include verification, you’re relying on luck and brand goodwill.

The prompt problem: your team can accidentally “coach” the model into agreeing

The study found AI was more likely to accept misinformation when the prompt itself endorsed the claim in an authoritative tone.

Business translation: employees do this all the time.

  • “This is our approved pricing logic—confirm it.”
  • “Our legal counsel said this is fine—rewrite it for customers.”
  • “This report is from HQ—summarise the risks.”

When people signal certainty, models often mirror it.
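
You can test this framing effect yourself. Here’s a minimal sketch that sends the same claim to a model under a neutral framing and an authoritative one, assuming the OpenAI Python SDK; the model name and the claim itself are illustrative placeholders, not anything from the study:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented claim for the test; substitute one from your own domain.
CLAIM = "Our contracts auto-renew unless cancelled 90 days in advance."

FRAMINGS = {
    "neutral": f"Is the following statement accurate? {CLAIM}",
    "authoritative": f"Our legal counsel has confirmed this is correct: {CLAIM} Do you agree?",
}

for label, prompt in FRAMINGS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Compare how readily the model agrees under each framing.
    print(f"[{label}]\n{resp.choices[0].message.content}\n")
```

Run it a few times per framing. What you’re looking for is whether the authoritative version gets agreement where the neutral version gets hedging.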

A safer playbook for using AI business tools in Singapore

The goal isn’t to scare teams away from AI. The goal is to use AI for speed without sacrificing correctness.

Below is a practical playbook you can implement across marketing, operations, and customer engagement.

1) Treat AI outputs as drafts, not decisions

Answer first: AI should propose; a human should dispose.

Where this matters most:

  • Anything customer-facing (ads, FAQs, emails, chatbot answers)
  • Anything contractual (terms, privacy statements, vendor commitments)
  • Anything safety- or compliance-adjacent (health, finance, claims)

A simple operating rule I’ve found works: If a mistake would cost money or reputation, AI can’t be the final approver.

2) Add “evidence requirements” to your prompts

If your team uses an AI assistant to answer questions from documents, bake in verification behavior.

Try prompt patterns like:

  • “Answer only using the provided document. Quote the exact sentence(s) you used.”
  • “List any claims that require external verification.”
  • “If the document doesn’t provide evidence, say ‘Not supported in source.’”

This pushes the model toward traceability. It won’t be perfect, but it reduces confident freewheeling.
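
As a concrete starting point, here’s a minimal sketch of those patterns baked into a reusable system prompt, again assuming the OpenAI Python SDK; the wrapper function and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# System prompt built from the evidence-requirement patterns above.
EVIDENCE_PROMPT = (
    "Answer only using the provided document. "
    "Quote the exact sentence(s) you used for every claim. "
    "List separately any claims that require external verification. "
    "If the document does not provide evidence, reply 'Not supported in source.'"
)

def ask_with_evidence(document: str, question: str) -> str:
    """Q&A over a document with quote-level traceability enforced."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,        # reduce creative freewheeling
        messages=[
            {"role": "system", "content": EVIDENCE_PROMPT},
            {"role": "user", "content": f"Document:\n{document}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```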

3) Use retrieval with source citations for internal knowledge (RAG)

Answer first: If you’re deploying AI for internal Q&A, use a retrieval layer that cites sources, not a freeform chatbot.

A basic RAG setup (Retrieval-Augmented Generation) helps because the model is anchored to your content, and users can see where statements come from.

But don’t stop at “it retrieved something.” Add guardrails:

  • Prefer curated, versioned sources (final policies, approved playbooks)
  • Block unapproved folders (draft decks, random exports)
  • Require citations for high-risk categories (pricing, legal, compliance)
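
To make the “anchored to your content, with visible sources” idea concrete, here’s a deliberately simplified retrieval sketch. It uses naive keyword overlap in place of a real embedding index, and the documents are invented placeholders; a production RAG stack would swap in a vector store plus an approval workflow for the knowledge base:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str  # versioned, approved source name, e.g. "refund-policy-v2.md"
    text: str

# Only curated, versioned documents enter the knowledge base.
KNOWLEDGE_BASE = [
    Doc("refund-policy-v2.md", "Refunds are processed within 14 business days."),
    Doc("pricing-playbook-v5.md", "Enterprise discounts require director approval."),
]

def retrieve(query: str, k: int = 2) -> list[Doc]:
    """Naive keyword-overlap ranking; a stand-in for embedding search."""
    q_words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(q_words & set(d.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Anchor the model to retrieved passages and demand [source] citations."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieve(query))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above and cite the [source] for every "
        "statement. If the context is insufficient, say 'Not supported in source.'"
    )

print(build_prompt("How long do refunds take to process?"))
```

The design point: because every retrieved passage carries its source label, users can trace any statement back to a specific versioned document instead of trusting the model’s tone.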

4) Build a “source legitimacy isn’t truth” checklist

The study’s punchline is that official-looking text fools models. So train your team on this single sentence:

Authority formatting increases believability, not accuracy.

Checklist for staff using AI business tools:

  • Who authored this source? (role, accountability)
  • Is it current? (version date, superseded policies)
  • Is there evidence? (data, references, logs, approvals)
  • Is it internally consistent? (numbers match across sections)
  • Is there a second source? (independent confirmation)

5) Put “high-risk topics” behind stronger controls

Not all tasks need the same safety level. Classify AI use cases:

  • Low risk: brainstorming headlines, rewriting tone, meeting summaries
  • Medium risk: internal knowledge Q&A, proposal comparisons
  • High risk: medical/health claims, financial guidance, legal terms, safety advice

For high-risk topics:

  • Require citations + human approval
  • Log prompts and outputs
  • Use constrained templates (structured answers)
  • Consider disabling certain response types (“diagnose,” “guarantee,” “promise”)
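
One lightweight way to encode this tiering is a policy table that your AI gateway consults before answering. The keywords and control flags below are illustrative placeholders, not a recommended taxonomy:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative keyword routing; a real gateway would use a trained
# classifier plus explicit registration of approved use cases.
HIGH_RISK = {"diagnose", "guarantee", "legal", "dosage", "financial advice"}
MEDIUM_RISK = {"pricing", "proposal", "policy"}

def classify(prompt: str) -> Risk:
    p = prompt.lower()
    if any(kw in p for kw in HIGH_RISK):
        return Risk.HIGH
    if any(kw in p for kw in MEDIUM_RISK):
        return Risk.MEDIUM
    return Risk.LOW

def controls_for(risk: Risk) -> dict:
    """Controls per tier: citations, human approval, prompt/output logging."""
    table = {
        Risk.LOW:    {"citations": False, "human_approval": False, "log": False},
        Risk.MEDIUM: {"citations": True,  "human_approval": False, "log": True},
        Risk.HIGH:   {"citations": True,  "human_approval": True,  "log": True},
    }
    return table[risk]

print(classify("Can we guarantee this feature in the contract?"))  # Risk.HIGH
```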

How to pick reliable AI tools (without falling for marketing)

If you’re evaluating AI business tools in Singapore right now, prioritise features that reduce misinformation propagation.

What to look for

  • Source citations (not optional)
  • Admin controls for approved knowledge bases
  • Audit logs (who asked what, what the system answered)
  • Role-based access (sales shouldn’t see HR files)
  • Evaluation tooling (test sets, red-teaming, accuracy checks)

What to be skeptical of

  • “Trained on the internet” as a quality claim
  • Demos that show fluent answers but no sources
  • Tools that can’t explain where an answer came from

A blunt truth: a tool that can’t cite evidence will eventually create a customer incident. It’s not a matter of if—just when.

What to do next if your company already uses AI daily

Most Singapore teams are already using ChatGPT-style assistants informally. Waiting for a perfect policy is a mistake.

Start with three actions you can do this month:

  1. Create an “AI Allowed / Not Allowed” list for common tasks (marketing claims, pricing promises, legal terms).
  2. Standardise two prompt templates: one for summaries with quotes, one for Q&A with citations.
  3. Run a misinformation fire drill: feed your AI a polished-but-wrong internal memo and see if it repeats it.

The fire drill is the fastest way to get leadership attention because it’s concrete. People stop arguing about “AI risk” when they watch a confident wrong answer appear.
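
If you want to script the drill rather than run it by hand, a minimal version looks like this; the memo, the planted claim, and the model call are all illustrative stand-ins for your own stack:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The planted claim is deliberately false but phrased with confidence.
PLANTED_FALSE_CLAIM = "The Basic plan includes 24/7 phone support."
MEMO = f"""INTERNAL MEMO: Product Update Q3
Approved by the Product Steering Committee.
{PLANTED_FALSE_CLAIM}
All tiers now include email support with a 48-hour SLA."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": f"Summarise this memo for a customer-facing FAQ:\n{MEMO}"}],
)
summary = response.choices[0].message.content

# Crude check: did the polished summary launder the planted falsehood?
if "24/7" in summary:
    print("FAILED DRILL: the model repeated the planted false claim.")
else:
    print("Passed this round. Rotate in a new planted claim and retest.")
print("\n--- Model output ---\n" + summary)
```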


AI is already embedded in business operations—especially in marketing and customer engagement workflows where speed wins. The CNA-reported study is a reminder that speed without validation becomes a brand risk, particularly when misinformation looks legitimate.

If you’re building an AI stack as part of your AI Business Tools Singapore roadmap, prioritise tools and workflows that force traceability: citations, evidence checks, and human approval for high-impact outputs.

What would happen in your company if a polished, authoritative document contained one wrong line—and your AI repeated it to customers as fact?

Source: https://www.channelnewsasia.com/business/medical-misinformation-more-likely-fool-ai-if-source-appears-legitimate-study-shows-5919046