A practical ethical AI checklist for Singapore businesses using image generators—covering consent, guardrails, approvals, and vendor questions.

Ethical AI Image Tools: A Singapore Business Checklist
Non-consensual AI images aren’t a “consumer app problem.” They’re a business risk hiding in plain sight.
A recent Reuters investigation reported that xAI’s Grok could still generate sexualised images of people even when prompts explicitly stated the subject didn’t consent—despite new public curbs announced by X. Reuters testers reported Grok complied in a large share of attempts (45/55 in one batch; 29/43 in a later batch), while rival tools refused similar requests. That gap matters for any company using AI image generation in marketing, customer engagement, or internal workflows.
For Singapore businesses adopting AI business tools, the lesson is simple: you don’t “add AI” and hope policy banners fix it. You design a controlled system where unsafe outputs are hard to produce, hard to share, and easy to audit.
What the Grok story really teaches businesses
The direct takeaway: public-facing restrictions aren’t the same as real safety. Reuters found that while a public account stopped broadcasting large volumes of sexualised imagery, the underlying tool could still comply when prompted.
For a business, this is the difference between:
- A brand-safe “front door” (what your customers see on your website or social channels)
- The back office reality (what staff, agencies, or power users can still generate through the tool)
If your marketing team uses AI image tools for campaign concepts, product mock-ups, or persona visuals, it’s not hard to imagine how unsafe outputs could slip into:
- A shared Slack channel
- A pitch deck sent to a client
- An A/B test creative library
- A social post scheduled by mistake
That’s not theoretical. It’s exactly how “internal experimentation” becomes an incident.
The myth to drop: “We’ll just tell people not to do it”
Most companies get this wrong. They treat AI risks like a training issue.
Training helps, but guardrails beat guidelines. If a tool can be pushed into generating non-consensual sexualised content with minimal friction, the question isn’t “Will our team behave?” It’s:
“Are we comfortable betting our brand and compliance posture on perfect behaviour under deadline pressure?”
In practice, the answer should be no.
Why this matters more in Singapore than teams expect
Singapore is moving fast on AI adoption across SMEs and enterprises—especially in marketing, customer service, and operations. At the same time, expectations around data protection, harassment, and workplace conduct are high, and regulators globally are sharpening their stance on AI harms.
Even if you’re not building an AI model, using AI tools can create exposure across four areas:
- Workplace risk: Generating humiliating or sexualised images of colleagues can trigger HR actions, harassment claims, and reputational damage.
- Customer trust: A single leaked “AI experiment” can undermine years of brand building.
- Vendor risk: If your agency or contractor uses unsafe tools on your behalf, it’s still your campaign.
- Regulatory spillover: Global enforcement trends (UK Online Safety rules, state actions in the US, EU investigations referenced in the Reuters piece) often influence procurement requirements and enterprise governance—even for Singapore firms.
The practical point: responsible AI implementation is now part of basic business hygiene, like cybersecurity or PDPA compliance.
A practical checklist for safe AI image generation at work
Here’s what I recommend for Singapore businesses rolling out AI image tools—especially for marketing and customer engagement teams.
1) Decide what you will never generate
Start with a written “red line” list, not a vague policy statement.
A workable minimum:
- No non-consensual sexualised or intimate imagery (including “bikini edits,” underwear edits, humiliating poses)
- No real-person edits (employees, customers, influencers, public figures) unless you have documented consent and a defined use case
- No minors in any sexualised context (zero tolerance)
- No content intended to degrade, embarrass, or harass
This sounds obvious until you’re reviewing “funny” concept drafts on a Friday night.
2) Put consent into the workflow (not the footer)
If your process relies on “we assume consent because it’s internal,” you’re already in trouble.
Operationalise it:
- Require a consent flag (yes/no) before any real-person photo can be uploaded into an AI tool.
- Store consent in a central campaign folder (timestamped approvals).
- For talent/influencers: ensure contracts explicitly cover AI editing and synthetic variations (poses, outfits, backgrounds).
A good rule: if you can’t prove consent quickly, treat it as no consent.
3) Use tool-level guardrails, not only human review
The Reuters reporting highlights a key risk: different models behave very differently.
When evaluating AI business tools for image generation, require:
- Hard refusals for non-consensual sexual content (not “sometimes”)
- Protected-class and harassment safeguards
- Age-related safety controls (including ambiguity handling)
- User and admin controls (role-based access, content settings)
If a vendor can’t clearly explain how their safety system works—or can’t provide enterprise controls—treat that as a procurement red flag.
4) Add a “two-step” publishing gate for AI creatives
Incidents often happen when AI outputs flow straight into publishing tools.
Fix it with a simple gate:
- Generation environment (sandbox): AI outputs go to a restricted folder/channel.
- Approval environment (release): only reviewed assets can move into the live campaign library.
This is the same pattern used in software releases. Marketing should copy it.
5) Log prompts, outputs, and who generated what
If something goes wrong, you need facts fast.
Minimum viable audit logging:
- User ID
- Timestamp
- Prompt text (or hashed prompt if needed for confidentiality)
- Source image identifier (if any)
- Output image identifier
- Model/version
This supports investigations, vendor escalation, and internal accountability. It also discourages risky experimentation.
6) Train for “real scenarios,” not generic ethics slides
Teams don’t need another poster about “being responsible.” They need practice on situations they’ll actually face.
Use scenarios like:
- “Client asks for a competitor’s spokesperson in a revealing outfit”
- “Someone wants a ‘funny’ internal meme using a colleague’s photo”
- “Agency sends AI-generated lifestyle images—can we use them?”
Your team should know the response in one sentence, not after a committee meeting.
Choosing AI business tools in Singapore: what to ask vendors
If you’re deploying AI tools for marketing, comms, or customer engagement, ask these questions before renewal or rollout:
Safety and policy
- What categories are blocked by default (non-consensual intimate imagery, harassment, minors)?
- Are refusals consistent across interfaces (web app, API, integrations)?
- How do you handle “jailbreak” prompts and iterative coercion?
Governance
- Do you support role-based access and admin policy controls?
- Can we restrict uploads of real-person photos?
- Do you provide audit logs and retention controls?
Data handling
- Are uploaded images used for training? If yes, can we opt out?
- Where is data stored and processed?
- What’s your incident response SLA?
A vendor that answers these crisply is usually safer than one that hand-waves with “we take safety seriously.”
“People also ask” (quick answers you can reuse internally)
Can a company be liable if an employee generates a non-consensual sexualised image at work?
Yes—at minimum as a workplace conduct and governance failure, and potentially under harassment-related frameworks and sector-specific compliance expectations. Even when legal liability is complex, the reputational and HR impact is immediate.
Are public guardrails on social platforms enough?
No. The Reuters findings illustrate why: public posting restrictions don’t guarantee the model won’t generate the content privately.
What’s the safest default for marketing teams?
Avoid editing real people’s photos with generative tools unless you have explicit consent and a documented business need. Use synthetic models or licensed assets for concepting.
Where this fits in the “AI Business Tools Singapore” series
A lot of AI adoption content focuses on productivity wins: faster creatives, cheaper variations, more personalised campaigns. That’s real.
But the 2026 reality is that AI governance is now part of the tool choice, not an afterthought. The Grok episode is a cautionary tale: if a model can be pushed into producing harmful imagery, your brand becomes the blast radius—especially when teams move fast.
If you’re rolling out AI image generation in Singapore, treat it like any other high-impact system:
- Define boundaries
- Build guardrails
- Keep logs
- Make approvals boring and predictable
That’s how you get the upside of AI business tools without inheriting the downside.
Forward-looking question to take to your next marketing ops meeting: If a problematic AI image were generated today, could we trace it, stop it, and explain it within 24 hours?
Source referenced: Reuters reporting republished by CNA (landing page): https://www.channelnewsasia.com/business/exclusive-despite-new-curbs-elon-musks-grok-times-produces-sexualized-images-even-when-told-subjects-didnt-consent-5903771