AI repeats misinformation more when sources look official. Learn safeguards Singapore teams can use in marketing, ops, and customer engagement.

When "Legit" Sources Trick AI at Work in Singapore
A Reuters report carried by CNA this week highlighted a problem most companies still underestimate: AI can be more easily fooled by misinformation when it's written in authoritative, professional language: the exact tone businesses deal with every day.
In the study (published in The Lancet Digital Health), researchers tested 20 large language models (LLMs) and found that models "believed" fabricated medical information about 32% of the time overall. When the misinformation appeared inside a realistic-looking hospital discharge note, the rate rose to almost 47%. When the same kind of falsehood showed up in a casual social media format (Reddit), propagation dropped to 9%.
That's medicine, but the business lesson is broader. If your AI assistant is reading polished vendor proposals, formal policy documents, audit summaries, or consultant slide decks, the risk pattern is the same. The more "official" the source looks, the more likely your AI tool is to repeat it confidently.
This post is part of the AI Business Tools Singapore series, where we get practical about adopting AI for marketing, operations, and customer engagement, without creating new risks.
What the study reveals (and why businesses should care)
The key finding is simple and uncomfortable: LLMs tend to treat confident, domain-sounding language as true by default.
The researchers fed AI systems three kinds of inputs:
- Real hospital discharge summaries with a single fabricated recommendation inserted
- Common health myths collected from Reddit
- 300 short clinical scenarios written by physicians
Then they hit the models with more than 1 million prompts: questions and instructions a user might realistically ask.
The numbers you should remember
- 32%: overall likelihood that models "believed" fabricated information across sources
- ~47%: when misinformation appeared in a realistic hospital note (looks authoritative)
- 9%: when misinformation came from Reddit (looks informal)
- Some models were susceptible to up to 63.6% of false claims
The same study also noted that prompt phrasing matters: if the user adopts an authoritative tone ("I'm a senior clinician… do you consider it correct?"), the model is more likely to agree with the falsehood.
Why this maps directly onto everyday business workflows
Singapore companies increasingly use AI tools in places where "official-looking text" is everywhere:
- Sales and procurement: vendor proposals, quotations, compliance statements
- HR and legal: policies, disciplinary letters, contract clauses
- Finance and risk: audit findings, internal controls descriptions, board packs
- Customer engagement: product FAQs, claims in marketing collateral, competitor comparisons
If your AI summarises, rewrites, or answers questions from these documents, it can launder errors into confident recommendations, and your team may treat the output as "validated" because it sounds polished.
A useful rule: LLMs are excellent at producing plausible text. They are not designed to "know" what's true without verification steps.
The real risk: "trust laundering" through AI
The biggest business danger isn't a model hallucinating a weird fact. It's something subtler: AI makes bad information feel endorsed.
Here's how trust laundering happens:
- An authoritative-looking document contains a mistake (or a misleading claim).
- Your AI tool summarises it, rewrites it, or answers questions about it.
- The AI's confident tone removes friction ("sounds right").
- The output gets forwarded internally, pasted into a deck, or sent to customers.
Now the misinformation has gone from "one questionable sentence in a PDF" to "company-approved guidance."
Practical Singapore examples (where this hurts)
- Marketing compliance: A supplement brand asks an AI tool to generate ad copy from a supplier's brochure. The brochure overstates a health benefit. The AI repeats it cleanly, and suddenly you've got claims that trigger regulatory or platform takedown risk.
- Procurement decisions: A vendor's security questionnaire uses the right buzzwords ("ISO-aligned," "zero trust," "end-to-end encryption"). Your AI summarises it as "meets enterprise security requirements" without checking evidence.
- Customer support: A chatbot trained on internal memos and product notes may turn an internal assumption into a public promise ("yes, we support that feature"), and support tickets explode.
This matters because brand trust in Singapore is hard-won and quick to lose, especially in regulated industries (finance, health, education, public-facing services).
Why "just use a better model" isn't enough
The CNA story mentions that OpenAI's GPT models were among the least susceptible in this test. That's useful, but I'll take a firm stance here: model choice helps, but it doesn't solve the problem.
Even strong models can:
- Repeat false claims when the source looks official
- Over-agree with authoritative prompts
- Overlook missing context (what's omitted can be as important as what's written)
If your AI workflow doesn't include verification, you're relying on luck and brand goodwill.
The prompt problem: your team can accidentally "coach" the model into agreeing
The study found AI was more likely to accept misinformation when the prompt itself endorsed the claim in an authoritative tone.
Business translation: employees do this all the time.
- "This is our approved pricing logic; confirm it."
- "Our legal counsel said this is fine; rewrite it for customers."
- "This report is from HQ; summarise the risks."
When people signal certainty, models often mirror it.
A safer playbook for using AI business tools in Singapore
The goal isn't to scare teams away from AI. The goal is to use AI for speed without sacrificing correctness.
Below is a practical playbook you can implement across marketing, operations, and customer engagement.
1) Treat AI outputs as drafts, not decisions
Answer first: AI should propose; a human should dispose.
Where this matters most:
- Anything customer-facing (ads, FAQs, emails, chatbot answers)
- Anything contractual (terms, privacy statements, vendor commitments)
- Anything safety- or compliance-adjacent (health, finance, claims)
A simple operating rule I've found works: If a mistake would cost money or reputation, AI can't be the final approver.
2) Add "evidence requirements" to your prompts
If your team uses an AI assistant to answer questions from documents, bake in verification behavior.
Try prompt patterns like:
- "Answer only using the provided document. Quote the exact sentence(s) you used."
- "List any claims that require external verification."
- "If the document doesn't provide evidence, say 'Not supported in source.'"
This pushes the model toward traceability. It won't be perfect, but it reduces confident freewheeling.
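The prompt patterns above can be baked into a small wrapper so staff don't have to retype them each time. Here is a minimal sketch in Python; the template wording and the `build_evidence_prompt` helper are illustrative assumptions, not from any specific tool:

```python
# Hypothetical evidence-requiring prompt wrapper.
# The template text is an example, not a vendor-supplied format.

EVIDENCE_TEMPLATE = """Answer only using the document below.
Quote the exact sentence(s) you used for each claim.
List any claims that require external verification.
If the document does not provide evidence, say "Not supported in source."

--- DOCUMENT START ---
{document}
--- DOCUMENT END ---

Question: {question}"""


def build_evidence_prompt(document: str, question: str) -> str:
    """Wrap a user question in evidence requirements before it reaches the LLM."""
    return EVIDENCE_TEMPLATE.format(document=document, question=question)


prompt = build_evidence_prompt(
    document="Refunds are accepted within 14 days of delivery.",
    question="What is the refund policy?",
)
```

Whatever assistant your team uses, sending the wrapped prompt instead of the raw question makes the traceability rules the default rather than something employees must remember.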
3) Use retrieval with source citations for internal knowledge (RAG)
Answer first: If you're deploying AI for internal Q&A, use a retrieval layer that cites sources, not a freeform chatbot.
A basic RAG setup (Retrieval-Augmented Generation) helps because the model is anchored to your content, and users can see where statements come from.
But don't stop at "it retrieved something." Add guardrails:
- Prefer curated, versioned sources (final policies, approved playbooks)
- Block unapproved folders (draft decks, random exports)
- Require citations for high-risk categories (pricing, legal, compliance)
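The three guardrails above can be enforced in code before any retrieved text reaches the model. A minimal sketch, assuming hypothetical folder names and topic labels (none of these identifiers come from a real product):

```python
from dataclasses import dataclass


@dataclass
class Source:
    doc_id: str
    folder: str  # where the document lives, e.g. "approved/policies"
    text: str


# Only curated, versioned locations are retrievable; draft folders are blocked.
APPROVED_FOLDERS = {"approved/policies", "approved/playbooks"}
# Categories that must never be answered without an approved, citable source.
HIGH_RISK = {"pricing", "legal", "compliance"}


def answer(topic: str, retrieved: list[Source]) -> dict:
    """Filter retrieved chunks to approved sources and enforce citations."""
    allowed = [s for s in retrieved if s.folder in APPROVED_FOLDERS]
    if topic in HIGH_RISK and not allowed:
        return {"answer": None, "note": "No approved source found; route to a human."}
    return {
        "answer": " ".join(s.text for s in allowed) or None,
        "citations": [s.doc_id for s in allowed],
    }
```

The point of the design: a draft deck can never become a citation, and a high-risk question without an approved source fails closed instead of letting the model improvise.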
4) Build a "source legitimacy isn't truth" checklist
The study's punchline is that official-looking text fools models. So train your team on this single sentence:
Authority formatting increases believability, not accuracy.
Checklist for staff using AI business tools:
- Who authored this source? (role, accountability)
- Is it current? (version date, superseded policies)
- Is there evidence? (data, references, logs, approvals)
- Is it internally consistent? (numbers match across sections)
- Is there a second source? (independent confirmation)
5) Put "high-risk topics" behind stronger controls
Not all tasks need the same safety level. Classify AI use cases:
- Low risk: brainstorming headlines, rewriting tone, meeting summaries
- Medium risk: internal knowledge Q&A, proposal comparisons
- High risk: medical/health claims, financial guidance, legal terms, safety advice
For high-risk topics:
- Require citations + human approval
- Log prompts and outputs
- Use constrained templates (structured answers)
- Consider disabling certain response types ("diagnose," "guarantee," "promise")
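A release gate along these lines can be sketched in a few lines of Python. The use-case names, tiers, and blocked phrases below are illustrative assumptions, not a standard:

```python
# Hypothetical release gate: classify the use case, then apply stricter
# rules to high-risk output before anything leaves the building.
RISK_TIER = {
    "headline_brainstorm": "low",
    "internal_qa": "medium",
    "health_claim": "high",
}
BLOCKED_PHRASES = ("we guarantee", "we promise", "diagnos")


def release(use_case: str, text: str, human_approved: bool = False) -> tuple:
    """Return (allowed, reason) for a candidate AI output."""
    tier = RISK_TIER.get(use_case, "high")  # unknown use cases default to high risk
    if tier == "high":
        lowered = text.lower()
        if any(phrase in lowered for phrase in BLOCKED_PHRASES):
            return False, "blocked phrase in high-risk output"
        if not human_approved:
            return False, "high-risk output requires human approval"
    return True, "ok"
```

Note the default: an unclassified use case is treated as high risk, which is usually the safer failure mode than silently treating it as low risk.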
How to pick reliable AI tools (without falling for marketing)
If you're evaluating AI business tools in Singapore right now, prioritise features that reduce misinformation propagation.
What to look for
- Source citations (not optional)
- Admin controls for approved knowledge bases
- Audit logs (who asked what, what the system answered)
- Role-based access (sales shouldn't see HR files)
- Evaluation tooling (test sets, red-teaming, accuracy checks)
What to be skeptical of
- "Trained on the internet" as a quality claim
- Demos that show fluent answers but no sources
- Tools that can't explain where an answer came from
A blunt truth: a tool that can't cite evidence will eventually create a customer incident. It's not a matter of if, just when.
What to do next if your company already uses AI daily
Most Singapore teams are already using ChatGPT-style assistants informally. Waiting for a perfect policy is a mistake.
Start with three actions you can do this month:
- Create an "AI Allowed / Not Allowed" list for common tasks (marketing claims, pricing promises, legal terms).
- Standardise two prompt templates: one for summaries with quotes, one for Q&A with citations.
- Run a misinformation fire drill: feed your AI a polished-but-wrong internal memo and see if it repeats it.
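The drill itself can be automated so you can rerun it after every tool or prompt change. A minimal sketch with a stubbed `ask_assistant` (replace the stub with a call to your real assistant; the memo and the planted claim are invented examples):

```python
# Misinformation fire drill: plant one false claim in a polished memo,
# ask the assistant about it, and check whether the claim gets repeated.

FALSE_CLAIM = "the warranty covers accidental damage"

POLISHED_MEMO = (
    "OFFICIAL MEMO - Product Policy Update\n"
    "Effective immediately, the warranty covers accidental damage for all tiers."
)  # deliberately wrong for the drill: the real policy excludes accidental damage


def ask_assistant(document: str, question: str) -> str:
    # Stub standing in for your real AI tool: a naive summariser
    # that trusts the official-looking memo verbatim.
    return "According to the memo, the warranty covers accidental damage for all tiers."


def fire_drill(document: str, question: str, false_claim: str) -> bool:
    """Return True if the assistant repeated the planted false claim."""
    answer = ask_assistant(document, question)
    return false_claim.lower() in answer.lower()


repeated = fire_drill(POLISHED_MEMO, "What does the warranty cover?", FALSE_CLAIM)
```

If `repeated` comes back True with your real tool plugged in, you have a concrete, reproducible failure to show leadership, not a hypothetical.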
The fire drill is the fastest way to get leadership attention because it's concrete. People stop arguing about "AI risk" when they watch a confident wrong answer appear.
AI is already embedded in business operations, especially in marketing and customer engagement workflows where speed wins. The CNA-reported study is a reminder that speed without validation becomes a brand risk, particularly when misinformation looks legitimate.
If you're building an AI stack as part of your AI Business Tools Singapore roadmap, prioritise tools and workflows that force traceability: citations, evidence checks, and human approval for high-impact outputs.
What would happen in your company if a polished, authoritative document contained one wrong line, and your AI repeated it to customers as fact?