AI in finance fails when workarounds become normal. Learn what fintech can take from NSW Health—and how to design security people will follow.

Secure AI Finance: Stop “Workarounds” Before They Spread
A NSW Health audit found something most security leaders recognise instantly: when systems get in the way of urgent work, people route around them. Clinicians reportedly saved patient data to personal devices, shared information via unsecured channels, and stayed logged in on shared computers because logging in and out was slow, painful, and constant. The audit also flagged weak or outdated cyber security plans and thin resourcing.
If you work in banking or fintech, it’s tempting to read that and file it under “healthcare problem.” I think that’s a mistake. The underlying pattern is industry-agnostic: when “secure” and “usable” aren’t aligned, security becomes optional in practice.
This matters even more in December 2025 than it did a few years ago because AI is now woven into day-to-day financial operations—fraud detection, credit decisioning, customer communications, AML monitoring, collections, and trading analytics. AI expands the “surface area” of sensitive data and automated decisions. If workarounds become normal, your AI program inherits risk you can’t model away.
The real problem isn’t non-compliance—it’s “normalised bypass”
Normalised bypass happens when the organisation quietly accepts rule-breaking as the price of getting work done. It’s not a one-off policy breach; it’s a workflow.
In the NSW Health audit, the drivers were familiar: time pressure (“clinical urgency”), slow systems, too many passwords, older technology, and limited secure options for information-sharing. In finance, the labels change but the mechanics don’t:
- “We had to get the loan approved before cutoff.”
- “The model monitoring dashboard is too slow, I exported it.”
- “The vendor needed logs, so we emailed them.”
- “MFA keeps failing in the branch, we stayed logged in.”
Once bypass becomes routine, two things follow:
- Your control environment becomes theoretical. Policies exist, but the effective control is “whatever the workflow allows.”
- Your incident likelihood rises while detection quality drops. Shadow processes don’t generate the right logs, alerts, or audit trails.
AI makes this worse. AI systems depend on reliable data lineage, access controls, and monitoring. If staff are exporting, copying, pasting, or uploading data to get AI-related work done faster, you’re building models—and customer experiences—on top of an unstable foundation.
A blunt truth for AI leaders in fintech
If your AI program needs people to “just this once” bypass controls to meet deadlines, your AI governance isn’t mature yet.
That’s not a moral judgement. It’s an engineering and operations issue.
Why finance should treat this as an AI governance warning
Banks and fintechs run on trust, and AI raises the cost of losing it. A single breach or compliance failure isn’t just a security event—it can become a regulatory event, a customer attrition event, and a capital allocation event.
The NSW Health audit highlighted gaps that map cleanly to common weak points in AI-driven financial systems:
1) Weak plans become weak responses
The audit found districts without effective cyber security plans or response plans, and business continuity planning that didn’t properly consider cyber risk. Translate that into a financial context and you get a predictable failure mode:
- AI fraud models go down (or degrade) during an outage.
- Manual fallback processes kick in.
- Staff create ad-hoc spreadsheets or exports to keep queues moving.
- Those files persist, spread, and become a new ungoverned dataset.
If you’re serious about AI in finance, you need more than model documentation. You need operational resilience for AI workflows: playbooks, fallback controls, and clear authority to pause automation when integrity is in doubt.
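To make “clear authority to pause automation” concrete, here is a minimal sketch under stated assumptions: the decision service, the integrity signals, and names like `ModelHealth` and `decide` are all invented for illustration, not a real system. The point is that a named integrity check, rather than an individual’s judgement under deadline pressure, routes work into a governed manual-review queue.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical integrity signals a platform team might track for a fraud model.
@dataclass
class ModelHealth:
    feature_freshness_ok: bool   # upstream pipelines delivered on schedule
    score_drift_ok: bool         # score distribution within agreed bounds
    paused_by_operator: bool     # a named owner has pulled the kill switch

def decide(transaction: dict, health: ModelHealth, model_score_fn) -> dict:
    """Use the model only when integrity is not in doubt; otherwise fall back
    to a governed manual-review queue instead of ad-hoc spreadsheets."""
    if health.paused_by_operator or not (health.feature_freshness_ok and health.score_drift_ok):
        return {
            "decision": "manual_review",
            "reason": "automation_paused",
            "logged_at": datetime.now(timezone.utc).isoformat(),
        }
    score = model_score_fn(transaction)
    return {
        "decision": "approve" if score < 0.8 else "manual_review",
        "reason": "model_score",
        "score": score,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: an operator pause forces every decision into the review queue.
health = ModelHealth(feature_freshness_ok=True, score_drift_ok=True, paused_by_operator=True)
print(decide({"amount": 420.0}, health, lambda txn: 0.12))
```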
2) “Crown jewels” aren’t just core banking anymore
The audit noted “crown jewel” systems that didn’t receive consistent monitoring. Finance has its own crown jewels—core banking, payments, customer identity, trading systems—but AI introduces new ones:
- Feature stores
- Model registries
- Vector databases (for retrieval-augmented generation)
- Prompt/response logs for customer AI assistants
- Data pipelines feeding credit, fraud, and AML
If these aren’t treated as tier-one assets (with logging, monitoring, and access governance), your AI estate becomes the easiest path to high-impact compromise.
3) Under-resourcing shows up as “security theatre”
The audit described lean cyber staffing and spend. Finance typically spends more than healthcare, but the pattern still appears in pockets—especially in fast-growing fintechs where product velocity outpaces governance.
Here’s what I’ve found: under-resourcing rarely looks like “we have no security.” It looks like security that can’t keep up with change. Controls exist, but exceptions pile up. Reviews become rubber-stamps. Logs are collected but not triaged. AI vendors are onboarded faster than data access is properly designed.
The workarounds fintech should expect (and design out)
The fastest way to reduce bypass is to predict where it will happen and remove the incentive. Based on the NSW Health findings, here are fintech equivalents that show up constantly:
1) Personal device and local storage “just to get it done”
In healthcare it was patient info saved to personal devices. In finance, it’s often:
- CSV exports of customer lists for campaign targeting
- Local copies of SAR/AML case notes
- Screenshotting dashboards and sending them in chat
- Downloaded call recordings for model training experiments
Design-out move: restrict exports by default, and provide approved “analysis sandboxes” with audited access and short-lived data.
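One way to picture that design-out move is the sketch below, assuming a hypothetical access service; `request_sandbox`, `export_dataset`, and `AUDIT_LOG` are invented names, not a real API. Exports fail closed, and the governed alternative is time-boxed and logged.

```python
import uuid
from datetime import datetime, timedelta, timezone

AUDIT_LOG = []  # stand-in for a SIEM or audit pipeline

def request_sandbox(user: str, dataset: str, purpose: str, hours: int = 8) -> dict:
    """Grant audited, time-boxed access to an approved analysis sandbox
    instead of allowing a raw export. Every grant is recorded and expires."""
    if hours > 24:
        raise ValueError("Sandbox grants are capped at 24 hours; request an extension instead.")
    grant = {
        "grant_id": str(uuid.uuid4()),
        "user": user,
        "dataset": dataset,
        "purpose": purpose,
        "expires_at": (datetime.now(timezone.utc) + timedelta(hours=hours)).isoformat(),
    }
    AUDIT_LOG.append({"event": "sandbox_granted", **grant})
    return grant

def export_dataset(user: str, dataset: str):
    """Default posture: deny the export, log the attempt, point to the governed path."""
    AUDIT_LOG.append({"event": "export_denied", "user": user, "dataset": dataset})
    raise PermissionError(f"Exports of {dataset} are disabled; use request_sandbox() instead.")

print(request_sandbox("analyst@bank.example", "card_transactions_v3", "fraud feature test"))
```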
2) Unapproved channels for data sharing
Fax/email in healthcare is a symptom of “no secure alternative.” In fintech it’s:
- Emailing documents to personal accounts to print/scan
- Sending logs to vendors without a secure transfer mechanism
- Sharing credentials in chat because access requests take days
Design-out move: give teams a secure file exchange, vendor access pattern, and an identity workflow that can grant time-bound access in hours, not weeks.
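A minimal sketch of the secure file exchange piece, assuming an in-memory share store and invented names (`create_secure_share`, `redeem_share`); a real implementation would sit behind your identity provider, but the properties to copy are the same: recipient-bound, single-use, and expiring.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Illustrative in-memory store; a real service would back this with a database.
SHARES = {}

def create_secure_share(owner: str, recipient: str, file_ref: str, ttl_hours: int = 4) -> str:
    """Replace 'email the file' with an expiring, recorded share token."""
    token = secrets.token_urlsafe(32)
    SHARES[token] = {
        "owner": owner,
        "recipient": recipient,
        "file_ref": file_ref,
        "expires_at": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        "downloaded": False,
    }
    return token

def redeem_share(token: str, requester: str) -> str:
    """Single-use, recipient-bound, time-bound redemption."""
    share = SHARES.get(token)
    if share is None or share["downloaded"] or requester != share["recipient"]:
        raise PermissionError("Share is invalid, already used, or not addressed to you.")
    if datetime.now(timezone.utc) > share["expires_at"]:
        raise PermissionError("Share has expired; request a new one.")
    share["downloaded"] = True
    return share["file_ref"]

token = create_secure_share("ops@bank.example", "vendor@kyc.example", "s3://bucket/logs-2025-12-01.json")
print(redeem_share(token, "vendor@kyc.example"))
```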
3) Staying logged in to keep queues moving
Shared machines plus slow authentication is a recipe for persistent sessions. In finance, this appears in branches, contact centres, and operations hubs.
Design-out move: combine fast sign-in (SSO, phishing-resistant MFA) with session controls that don’t punish workers: tap-in/tap-out authentication, conditional access, and rapid re-auth for high-risk actions.
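A step-up rule of this kind can be sketched in a few lines. The action names and the five-minute window below are illustrative assumptions, not policy advice, and the function is a stand-in for whatever conditional access engine you actually run.

```python
from datetime import datetime, timedelta, timezone

# Illustrative risk tiers; a real deployment would pull these from policy config.
HIGH_RISK_ACTIONS = {"approve_payment", "change_customer_contact", "export_report"}
REAUTH_WINDOW = timedelta(minutes=5)

def needs_step_up(action: str, last_strong_auth: datetime, shared_device: bool) -> bool:
    """Keep routine work fast, but require fresh, phishing-resistant re-auth
    for high-risk actions when the session is stale or the endpoint is shared."""
    stale = datetime.now(timezone.utc) - last_strong_auth > REAUTH_WINDOW
    return action in HIGH_RISK_ACTIONS and (stale or shared_device)

recent = datetime.now(timezone.utc) - timedelta(minutes=2)
print(needs_step_up("view_queue", recent, shared_device=True))       # False: no added friction
print(needs_step_up("approve_payment", recent, shared_device=True))  # True: step up
```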
A practical playbook: secure-by-design AI for financial services
The goal isn’t “more rules.” It’s fewer reasons to break them. Here’s a pragmatic set of moves that works for banks and fintechs building AI-enabled systems.
1) Make “secure workflow time” a measurable KPI
If logging in/out or accessing the right system costs too much time, bypass is rational. Measure it.
- Average time to authenticate on shared endpoints
- Time to request and receive access to a dataset or model
- Time to complete a “secure share” to a vendor
Then set targets (for example, reduce access lead time from 5 days to 1 day; reduce re-auth friction by 30%). Security teams that measure friction earn credibility—and get budget.
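As an illustration of measuring friction, here is a small sketch that computes access lead time from hypothetical request/grant timestamps; in practice you would pull these events from your ticketing or IAM system rather than hard-coding them.

```python
from datetime import datetime
from statistics import mean

# Illustrative access-request events, e.g. exported from a ticketing system.
requests = [
    {"requested": "2025-11-03T09:00", "granted": "2025-11-07T16:00"},
    {"requested": "2025-11-10T11:30", "granted": "2025-11-12T09:15"},
    {"requested": "2025-11-17T08:45", "granted": "2025-11-18T10:00"},
]

def lead_time_days(event: dict) -> float:
    """Time between 'asked for access' and 'got access', expressed in days."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(event["granted"], fmt) - datetime.strptime(event["requested"], fmt)
    return delta.total_seconds() / 86400

lead_times = [lead_time_days(e) for e in requests]
print(f"average access lead time: {mean(lead_times):.1f} days")
print(f"worst case: {max(lead_times):.1f} days")
# Report these against the target (e.g. 1 day) alongside uptime and latency.
```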
2) Treat data leakage controls as AI controls
AI risk management isn’t separate from cyber security. Your AI program should explicitly include:
- Data classification tied to model use cases
- DLP policies that understand regulated data types
- Prompt and response logging standards for AI assistants
- Controls for training data retention and deletion
If your AI assistant can see a document, assume it can leak the document—accidentally or maliciously.
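One way to make that concrete is to wrap the assistant call so logging and redaction are not optional. This is a minimal sketch with naive regex redaction and invented names (`ask_assistant`, `AI_AUDIT_LOG`); real DLP for regulated data types is far more involved, but the shape of the control is the same.

```python
import re
from datetime import datetime, timezone

AI_AUDIT_LOG = []

# Very rough patterns for regulated data types; real DLP uses proper classifiers.
PAN_RE = re.compile(r"\b\d{13,19}\b")          # payment card numbers
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Mask obvious card numbers and email addresses before storage."""
    return EMAIL_RE.sub("[EMAIL]", PAN_RE.sub("[PAN]", text))

def ask_assistant(user: str, prompt: str, model_fn) -> str:
    """Wrap the assistant call so every prompt/response pair is logged,
    with regulated data redacted, under a defined retention period."""
    response = model_fn(prompt)
    AI_AUDIT_LOG.append({
        "user": user,
        "prompt": redact(prompt),
        "response": redact(response),
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "retention_days": 90,   # illustrative retention period
    })
    return response

print(ask_assistant("agent7", "Why was card 4111111111111111 declined?",
                    lambda p: "The card was declined by the issuer."))
print(AI_AUDIT_LOG[-1]["prompt"])   # "Why was card [PAN] declined?"
```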
3) Standardise monitoring for “AI crown jewels”
You don’t need perfect monitoring everywhere; you need consistent monitoring where impact is highest.
Start by defining your AI crown jewels, then apply a uniform baseline:
- Centralised logs into your SOC
- Privileged access management
- Change control for models and pipelines
- Alerts for unusual export, access spikes, and permission drift
Consistency beats complexity.
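Here is a sketch of what one baseline alert could look like, using made-up export events and an illustrative threshold; the same pattern works for access spikes and permission drift.

```python
from collections import Counter

# Illustrative daily export events per user from a crown-jewel system's access log.
export_events = [
    ("analyst_a", "2025-12-01"), ("analyst_a", "2025-12-01"),
    ("analyst_b", "2025-12-01"),
    ("analyst_a", "2025-12-02"), ("analyst_a", "2025-12-02"), ("analyst_a", "2025-12-02"),
    ("analyst_a", "2025-12-02"), ("analyst_a", "2025-12-02"), ("analyst_a", "2025-12-02"),
]

BASELINE_EXPORTS_PER_DAY = 3   # illustrative threshold agreed with the data owner

def export_alerts(events):
    """Flag users whose daily exports exceed the agreed baseline,
    so the SOC sees spikes instead of discovering spreadsheets later."""
    counts = Counter(events)   # (user, day) -> number of exports
    return [
        {"user": user, "day": day, "exports": n}
        for (user, day), n in counts.items()
        if n > BASELINE_EXPORTS_PER_DAY
    ]

print(export_alerts(export_events))
# [{'user': 'analyst_a', 'day': '2025-12-02', 'exports': 6}]
```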
4) Build a no-blame reporting loop for bypass
The NSW Health audit noted that staff found it hard to raise issues directly because of silos and culture. Finance has the same problem: security hears about bypass after it’s entrenched.
Create a lightweight mechanism:
- A dedicated channel for “this control blocks my job”
- A 48-hour response SLA from security/IT
- Monthly “top friction points” review with ops and risk
If you punish people for telling the truth, you train them to hide it.
5) Don’t let vendors become your bypass pathway
AI adoption in fintech often means more third parties: model platforms, identity vendors, contact-centre AI, KYC automation, regtech tools.
Set a hard standard:
- No production data sent to vendors via email or ad-hoc uploads
- Time-bound vendor access with logging
- Clear boundaries for model training on your data
Vendor speed should never require security shortcuts.
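A minimal sketch of that standard expressed as code, with invented names (`grant_vendor_access`, `APPROVED_CHANNELS`): access is channel-gated, time-bound, and records the training-use boundary explicitly.

```python
from datetime import datetime, timedelta, timezone

# Illustrative; email and ad-hoc uploads are never on this list.
APPROVED_CHANNELS = {"secure_transfer_service"}

def grant_vendor_access(vendor: str, dataset: str, channel: str,
                        may_train_on_data: bool, days: int = 7) -> dict:
    """Issue vendor access only through an approved channel, time-bound and recorded,
    with the training-use boundary captured in the grant itself."""
    if channel not in APPROVED_CHANNELS:
        raise PermissionError(f"Channel '{channel}' is not approved for production data.")
    return {
        "vendor": vendor,
        "dataset": dataset,
        "channel": channel,
        "may_train_on_data": may_train_on_data,   # contractual boundary, made explicit
        "expires_at": (datetime.now(timezone.utc) + timedelta(days=days)).isoformat(),
    }

print(grant_vendor_access("kyc-vendor", "onboarding_docs_2025Q4",
                          "secure_transfer_service", may_train_on_data=False))
```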
What this looks like in a real fintech scenario
A common situation: the fraud team wants to test a new model feature based on device telemetry and customer behaviour sequences. The approved data environment is slow to provision and the feature store doesn’t yet support the transformation.
So someone exports raw events to a local machine, builds the feature in a notebook, and shares the results in a spreadsheet. It “works,” the feature gets adopted, and now you’ve got:
- Sensitive behavioural data outside governed systems
- No clear lineage for how the feature was derived
- A model that can’t be easily audited or reproduced
- A quiet precedent that exporting is acceptable under deadline pressure
The fix isn’t telling the team to “be more careful.” The fix is building an internal path that’s faster than the shortcut: a governed sandbox, a feature engineering pattern, and a provisioning process that matches business tempo.
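To show what “a feature engineering pattern” can mean in practice, here is a minimal sketch of lineage capture at registration time; `FEATURE_REGISTRY` and `register_feature` are hypothetical stand-ins for whatever feature store or metadata layer you actually run, and the SQL is a placeholder.

```python
from datetime import datetime, timezone

FEATURE_REGISTRY = []   # stand-in for a real feature store's metadata layer

def register_feature(name: str, source_tables: list[str], transform_sql: str,
                     owner: str, pii: bool) -> dict:
    """Capture lineage at the moment a feature is created, so the model
    can be audited and reproduced without chasing spreadsheets."""
    entry = {
        "name": name,
        "source_tables": source_tables,   # where the data came from
        "transform": transform_sql,       # how it was derived
        "owner": owner,
        "contains_pii": pii,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    FEATURE_REGISTRY.append(entry)
    return entry

register_feature(
    name="device_switch_count_7d",
    source_tables=["events.device_telemetry", "events.sessions"],
    transform_sql="SELECT customer_id, COUNT(DISTINCT device_id) FROM ... GROUP BY customer_id",
    owner="fraud-analytics",
    pii=False,
)
print(FEATURE_REGISTRY[0]["source_tables"])
```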
Lead-ready next steps: a quick self-check for your AI security posture
If you’re building AI in finance and fintech, you can use this checklist to spot “normalised bypass” before it becomes a headline:
- Can frontline teams complete core tasks without exporting sensitive data?
- Do you know where AI prompts, responses, and training datasets are stored—and who can access them?
- Are your AI-related systems monitored like tier-one assets?
- Do you have incident playbooks that include AI models and pipelines (not just infrastructure)?
- Is security friction measured and owned like uptime and latency?
If you answered “no” to two or more, I’d prioritise fixes now—before your next AI rollout, not after it.
The NSW Health audit is a reminder that people don’t “choose” insecurity; they choose the path that lets them do the job. In AI-driven financial services, the job includes decisions and actions at machine speed. That’s why secure digital infrastructure and compliance aren’t box-ticking—they’re the prerequisite for scaling AI without scaling risk.
What’s the one workflow in your organisation where you suspect bypass is already normal—and what would it take to make the secure path the easiest one?