AI agent accountability is a growing risk for Singapore firms. Learn who’s liable, why “agent-as-user” is dangerous, and the controls to fix it.
AI Agents in Singapore: Who’s Liable When They Act?
A lot of Singapore companies are rolling out AI agents in the most dangerous way possible: the agent logs in “as a user.” Same permissions, same audit trail, same blame. If the agent makes a bad call in your CRM, triggers a refund spiral in your billing system, or pulls customer data it shouldn’t have, your logs often say you did it.
That’s not a theoretical problem. It’s a governance and legal exposure problem that shows up the moment agents move from “suggesting” to doing—creating tickets, changing records, approving payments, emailing customers, or delegating work to other agents. Tobin South (WorkOS; Stanford’s Loyal Agents Initiative; OpenID Foundation AI Identity group) put it plainly: most agents today impersonate humans, and that creates massive accountability blind spots.
This post is part of the AI Business Tools Singapore series, where we focus on practical adoption—marketing, ops, customer engagement—without stepping on compliance landmines. If your 2026 roadmap includes autonomous agents, you need to treat identity and authorisation as first-class product requirements, not “security’s problem later.”
The real risk: your agent is pretending to be you
If an AI agent acts under a shared human identity, you lose clean attribution. That’s the core issue. When something goes wrong, you can’t confidently answer: Was this a human action, an agent action, or an attacker using the same credentials?
Here’s why that matters in day-to-day business operations in Singapore:
- Internal investigations stall. If your audit trail can’t distinguish agent actions from employee actions, the incident response team burns days reconstructing intent.
- Regulatory exposure increases. Many compliance obligations rely on demonstrable controls, least-privilege access, and verifiable logs.
- Fraud gets easier. An attacker doesn’t need fancy exploits if they can piggyback on an over-permissioned “agent-as-user” account.
A sentence worth remembering: “If you can’t prove an action was performed by an agent, you also can’t prove a human didn’t do it.” That’s how liability headaches start.
A concrete scenario (marketing + finance)
You deploy an AI agent to help sales ops and marketing:
- It reads HubSpot/Salesforce notes to draft outreach.
- It updates pipeline stages.
- It triggers promotional credits in your billing tool for “at-risk” accounts.
One night it misclassifies a set of enterprise customers as “at-risk” and issues credits. Finance sees a revenue leak. Sales sees pipeline chaos. The logs show the changes were made by a sales manager’s account because the agent used delegated access that looks identical to the manager’s.
Now you’re not just fixing data. You’re answering: who approved this behaviour, who monitored it, and who is accountable for the loss?
Why we’re repeating the early internet’s security mistakes
AI agents are scaling faster than governance frameworks can adapt. South’s comparison to the early web is sharp for a reason: the internet shipped first, then spent decades bolting on HTTPS, authentication standards, and safer defaults. Agents, by contrast, are being deployed across CRMs, ERPs, and customer support platforms in months.
The difference this time is impact. A compromised website used to get defaced. A compromised agent can:
- initiate financial transfers or fraudulent refunds
- exfiltrate customer PII at machine speed
- alter medical or insurance records
- spawn additional agents (or agent workflows) before anyone notices
The uncomfortable reality? Retrofitting accountability into an agent ecosystem is far harder than building it in from day one. Once agents are embedded in “how work gets done,” changing identity models becomes politically and operationally painful.
Fragmentation makes it worse
Another accelerant is ecosystem fragmentation. Platforms are racing to introduce proprietary “agent identity” approaches. That creates two problems:
- Interoperability breaks. Your agent that touches Google Workspace, Microsoft 365, a CRM, and a data warehouse ends up with mismatched identity semantics.
- Vulnerabilities multiply. Every custom approach becomes a new surface area for misconfiguration.
For Singapore businesses that operate across multiple SaaS systems (common for mid-market and regional HQ teams), fragmentation isn’t academic—it’s operational risk.
Legal responsibility: you can’t govern what you can’t attribute
Legal responsibility typically follows control: who designed, deployed, authorised, and supervised the system. But when attribution is murky, even basic questions become contentious:
- Did an employee authorise this action?
- Did the agent exceed its mandate?
- Was it a foreseeable failure mode?
- Did the company apply reasonable security controls?
Even without naming specific statutes, the governance direction is clear: organisations are expected to show reasonable controls—especially when customer data, financial actions, or safety-critical decisions are involved.
“Paper trail” isn’t paperwork—it’s your safety net
South’s point about a “security camera that never blinks” is a good mental model. For agents, auditability needs to show:
- Agent identity (which agent, version, and policy)
- Human authoriser (who delegated authority)
- Scope (what systems and actions were permitted)
- Context (why the agent acted: ticket ID, workflow run, customer case)
- Integrity (logs are tamper-evident and retained)
If you can’t produce that quickly during an incident, you’ll struggle to defend decisions to customers, auditors, insurers, or regulators.
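To make that concrete, here is a minimal Python sketch of what one tamper-evident audit record could look like. The field names (`agent_id`, `human_sponsor`, `context_ref`) and the hash-chaining approach are illustrative assumptions, not a standard your tools will ship with.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentAuditRecord:
    # Agent identity: which agent, version, and policy made the call
    agent_id: str
    agent_version: str
    policy_id: str
    # Human authoriser: who delegated authority for this workflow
    human_sponsor: str
    # Scope and context: what was permitted, and why the agent acted
    allowed_scopes: list[str]
    action: str
    target_system: str
    context_ref: str          # e.g. a ticket ID or workflow run ID
    timestamp: str
    # Integrity: carrying the previous record's hash makes tampering evident
    prev_hash: str

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = AgentAuditRecord(
    agent_id="crm-outreach-agent",
    agent_version="1.4.2",
    policy_id="sales-ops-standard",
    human_sponsor="jane.tan@example.sg",
    allowed_scopes=["crm:read", "crm:update_stage"],
    action="update_pipeline_stage",
    target_system="salesforce",
    context_ref="workflow-run-8841",
    timestamp=datetime.now(timezone.utc).isoformat(),
    prev_hash="<hash of previous record>",
)
print(record.digest())  # store alongside the record; chain it into the next one
```

The exact storage doesn’t matter as much as the discipline: every write an agent makes should produce one of these, automatically, with no way for the agent itself to skip it.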
Recursive delegation: when agents hire other agents
Recursive delegation is when an agent delegates tasks to other agents. It’s powerful—an “orchestrator” agent can spin up specialists for research, billing adjustments, customer replies, or data reconciliation.
It’s also where traditional permission models collapse.
Why current authorisation frameworks fail
Most enterprise access control assumes:
- a human user
- a fixed role
- a bounded number of actions per minute
- explicit intent behind each click
Agents don’t behave like that. They can execute thousands of actions per minute, chain decisions, and act across systems. If you give an agent broad OAuth scopes “just to make it work,” you’ve effectively issued master keys.
A practical stance I recommend: treat delegation like issuing a corporate credit card. You wouldn’t hand one out without a limit, a purpose, and monitoring. Agent permissions should work the same way.
What “good” delegation looks like
For Singapore teams deploying AI business tools and agents, aim for:
- Time-bounded access (permissions expire automatically)
- Task-bounded access (only actions needed for a workflow)
- Rate limits (cap actions/minute to contain damage)
- Second-person approval gates (for money movement, mass email, data exports)
- Kill switches (one-click disable + token revocation)
These controls don’t slow teams down much if they’re designed upfront. They’re brutal to bolt on later.
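As a rough illustration of how those five controls can live in one place, here is a minimal Python sketch of a delegation grant. Everything in it (the `DelegationGrant` name, the action strings, the 30-per-minute default) is a made-up example for this post, not a reference to any specific product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Actions that should always require a second human approval
APPROVAL_REQUIRED = {"issue_refund", "mass_email", "export_customer_data"}

@dataclass
class DelegationGrant:
    agent_id: str
    human_sponsor: str
    allowed_actions: set[str]                 # task-bounded
    expires_at: datetime                      # time-bounded
    max_actions_per_minute: int = 30          # rate-limited
    revoked: bool = False                     # kill switch
    _recent: list[datetime] = field(default_factory=list)

    def authorise(self, action: str, now: datetime | None = None) -> str:
        now = now or datetime.now(timezone.utc)
        if self.revoked:
            return "deny: grant revoked (kill switch)"
        if now >= self.expires_at:
            return "deny: grant expired"
        if action not in self.allowed_actions:
            return "deny: action outside mandate"
        # keep only the last minute of activity, then enforce the cap
        self._recent = [t for t in self._recent if now - t < timedelta(minutes=1)]
        if len(self._recent) >= self.max_actions_per_minute:
            return "deny: rate limit exceeded"
        self._recent.append(now)
        if action in APPROVAL_REQUIRED:
            return "pending: second-person approval required"
        return "allow"

grant = DelegationGrant(
    agent_id="billing-assist-agent",
    human_sponsor="ops.lead@example.sg",
    allowed_actions={"read_invoice", "issue_refund"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)
print(grant.authorise("issue_refund"))    # pending: second-person approval required
print(grant.authorise("delete_account"))  # deny: action outside mandate
```

Notice that the grant answers the credit-card questions by construction: who sponsored it, what it can buy, when it expires, and how to cancel it.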
A practical checklist for Singapore businesses adopting AI agents
If you’re adopting AI agents for marketing, operations, or customer engagement, start with identity and auditability before you scale. Here’s a field-tested checklist you can run in a workshop with IT, security, ops, and the business owner of the agent.
1) Give agents their own identities (no more “agent-as-user”)
- Create service identities for agents, not shared staff accounts.
- Separate identities per environment: dev, staging, production.
- Tag every action with `agent_id`, `workflow_id`, and `human_sponsor` (see the tagging sketch below).
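If you want a feel for what that tagging means in practice, here is a small Python sketch using the standard logging module. The identifiers (the `svc-crm-agent-prod` service account, the workflow ID, the sponsor email) are hypothetical.

```python
import logging

# Structured log tagging: every agent action carries its own identity,
# the workflow it ran in, and the human who sponsored it.
logger = logging.getLogger("agent-actions")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s agent=%(agent_id)s env=%(env)s "
    "workflow=%(workflow_id)s sponsor=%(human_sponsor)s :: %(message)s"
))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Hypothetical identifiers; each environment gets its own service identity.
tags = {
    "agent_id": "svc-crm-agent-prod",   # a service identity, not a staff account
    "env": "production",
    "workflow_id": "wf-outreach-2026-118",
    "human_sponsor": "jane.tan@example.sg",
}
logger.info("Updated pipeline stage for account ACME-001", extra=tags)
```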
2) Define the agent’s mandate like a contract
Write down:
- allowed systems (CRM, helpdesk, billing)
- allowed actions (read-only vs write)
- data boundaries (which customer segments, which fields)
- escalation rules (when to hand off to a human)
If you can’t state the mandate in one page, the scope is too big.
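A one-page mandate can literally be a small config checked into version control. The sketch below is an illustrative Python dict; the field names and values are assumptions, not a standard schema.

```python
# A minimal "mandate as a contract" for one agent. Everything here is an
# example for illustration; adapt the fields to your own systems.
AGENT_MANDATE = {
    "agent_id": "support-triage-agent",
    "owner": "head.of.cx@example.sg",
    "allowed_systems": ["zendesk", "hubspot"],           # billing is deliberately absent
    "allowed_actions": {
        "zendesk": ["read_ticket", "add_internal_note", "set_priority"],
        "hubspot": ["read_contact"],                     # read-only in the CRM
    },
    "data_boundaries": {
        "customer_segments": ["smb"],                    # no enterprise accounts
        "excluded_fields": ["nric", "payment_method"],
    },
    "escalation_rules": [
        "hand off to a human if the customer mentions a refund",
        "hand off to a human if the ticket is flagged as legal or complaint",
    ],
    "review_cycle": "every 90 days",
}
```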
3) Use least privilege with real controls, not hope
Common “hope-based security” patterns:
- giving admin scopes “temporarily”
- using one token for multiple agents
- skipping approvals because it’s “just marketing data”
Replace them with:
- scoped tokens per workflow (see the sketch after this list)
- approval steps for irreversible actions
- monitored exceptions (and an owner for each exception)
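As a contrast to “one token for multiple agents,” here is a rough Python sketch of minting a short-lived, narrowly scoped token per workflow run. The function name, scope strings, and TTL are hypothetical stand-ins for whatever your identity provider actually issues.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Anti-pattern: one long-lived admin token shared by every agent.
# Alternative sketched here: a short-lived, narrowly scoped token per workflow run.
def mint_workflow_token(agent_id: str, workflow_id: str, scopes: list[str],
                        ttl_minutes: int = 30) -> dict:
    assert "admin" not in scopes, "never grant blanket admin to an agent"
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "workflow_id": workflow_id,
        "scopes": scopes,                                 # only what this workflow needs
        "expires_at": (datetime.now(timezone.utc)
                       + timedelta(minutes=ttl_minutes)).isoformat(),
    }

token = mint_workflow_token(
    agent_id="svc-marketing-agent-prod",
    workflow_id="wf-promo-credits-0412",
    scopes=["billing:read", "billing:credit:draft"],      # drafts still need human approval
)
print(token["scopes"], token["expires_at"])
```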
4) Make logs usable during a crisis
Ask your team to answer these questions in 10 minutes:
- What did the agent change in the last 24 hours?
- Which human authorised those workflows?
- Which external systems were touched?
- Can you replay the decision path (inputs → model → tool calls)?
If that takes hours, your logging is not incident-ready.
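If your audit records are structured (as in the earlier record sketch), the 24-hour question becomes a short query. A minimal Python version, assuming the same illustrative field names:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the 10-minute incident query over structured audit records.
def last_24h_report(records: list[dict], agent_id: str) -> dict:
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    recent = [r for r in records
              if r["agent_id"] == agent_id
              and datetime.fromisoformat(r["timestamp"]) >= cutoff]
    return {
        "actions": [(r["timestamp"], r["action"], r["context_ref"]) for r in recent],
        "human_authorisers": sorted({r["human_sponsor"] for r in recent}),
        "systems_touched": sorted({r["target_system"] for r in recent}),
    }

print(last_24h_report([], "svc-crm-agent-prod"))  # empty store, empty report
```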
5) Prepare for agent incidents like you prepare for outages
Run a tabletop exercise:
- Agent sends 5,000 customer emails with the wrong offer.
- Agent exports a customer list to an unapproved destination.
- Agent issues refunds beyond policy.
Document:
- who pulls the kill switch
- who communicates to customers
- how you preserve evidence (logs, prompts, tool-call traces)
- how you restore clean data
This is the operational side of accountability. It’s not optional once agents can act.
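The kill switch itself should be boring and rehearsed. Below is a deliberately simplified, in-memory Python sketch of the sequence (disable, revoke, preserve evidence). In production the equivalent calls would go to your identity provider and secrets manager; every name here is a stand-in.

```python
from datetime import datetime, timezone

# In-memory stand-ins for an agent registry, a token store, and an evidence vault.
AGENT_STATUS = {"svc-billing-agent-prod": "active"}
ACTIVE_TOKENS = {"svc-billing-agent-prod": ["tok_abc", "tok_def"]}
EVIDENCE = []

def kill_switch(agent_id: str, reason: str) -> dict:
    AGENT_STATUS[agent_id] = "disabled"            # agent stops receiving work
    revoked = ACTIVE_TOKENS.pop(agent_id, [])      # existing credentials stop working
    EVIDENCE.append({                              # preserve evidence before restoring data
        "agent_id": agent_id,
        "reason": reason,
        "items": ["audit_logs", "prompts", "tool_call_traces"],
        "taken_at": datetime.now(timezone.utc).isoformat(),
    })
    return {"agent_id": agent_id, "tokens_revoked": len(revoked), "reason": reason}

print(kill_switch("svc-billing-agent-prod", "refunds issued beyond policy"))
```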
People also ask: quick answers for decision-makers
Who is responsible when an AI agent makes a decision?
The organisation deploying the agent is responsible for governance, controls, and oversight. Vendors matter, but you can’t outsource accountability for how you configure, authorise, and monitor the agent in your environment.
Can we just keep agents “read-only” to reduce risk?
Yes—read-only is a sensible first phase for many Singapore businesses. But the value of agents often comes from taking action. Plan your path from read-only to write access with staged permissions and approval gates.
What’s the single most common mistake teams make?
Letting the agent impersonate a human user. It feels convenient. It creates messy attribution, weakens security controls, and makes investigations painful.
Building responsible AI agents is a business decision, not a security project
Singapore businesses are adopting AI business tools fast—especially in sales ops, customer support, and marketing automation. The teams that win in 2026 won’t be the ones with the most agents. They’ll be the ones with agents that are auditable, permissioned, and governable.
If you’re planning to deploy AI agents this quarter, start with one uncomfortable question: If this agent causes a customer-impacting incident, can you prove exactly what it did, why it did it, and who authorised it—within the same day?
If the honest answer is “probably not,” that’s your next project.