AI can spot security bypass patterns in finance before they become breaches. Learn practical controls, workflows, and a 30-day plan to reduce risk.

When Staff Bypass Security: AI Fixes for Finance
An NSW Health audit found something most security teams recognise instantly: when systems slow people down, people route around them. Clinicians reportedly saved patient data to personal devices, used unsecured apps, shared data via fax or email, and stayed logged into shared computers because logging in and out repeatedly was “cumbersome and disruptive.” That’s not a “healthcare problem.” It’s an operations problem that turns into a cybersecurity problem.
For banks and fintechs, the stakes are just as high—often higher—because money moves at machine speed, and a single compromised session can trigger fraud, data loss, regulatory reporting, and customer churn. Here’s the uncomfortable truth: you can deploy every policy you want, but if day-to-day work requires bypasses, bypasses will become the norm.
This post sits inside our AI in Finance and FinTech series, where we usually talk about AI for fraud detection, credit risk, and customer analytics. This time, the focus is more basic and more urgent: AI for access control, insider risk, and real-time detection of “workarounds” before they become breaches.
What the NSW Health audit really tells security leaders
The direct lesson is simple: non-compliance is frequently a symptom of workflow design. In the NSW case, “clinical urgency” collided with slow systems, multiple logins, and complex passwords. The audit also highlighted gaps that security leaders in any regulated environment will recognise:
- Outdated or ineffective cyber security plans and lack of fit-for-purpose response planning
- Business continuity and disaster recovery plans that didn’t properly incorporate cyber risk
- Inconsistent monitoring even for “crown jewel” systems
- Lean resourcing and under-spend relative to benchmark expectations
If you’re in financial services, you can translate “clinical urgency” into:
- payments cut-off times and exception handling
- call-centre identity checks under queue pressure
- traders, operations staff, and relationship managers moving fast
- incident responders needing broad access in the middle of an event
When teams feel they’re being measured on speed (and they are), security steps that add friction become optional in practice, even if they’re mandatory on paper.
The real risk isn’t one bypass—it’s the culture that forms around it
A single person saving a file to a local drive is bad. A whole team doing it for months because “that’s how we get things done” is worse. The audit used a phrase that should make any CISO flinch: “normalisation of non-compliance.”
In finance, this normalisation shows up as:
- shared inboxes and shared credentials for “coverage”
- screenshots of customer documents in chat threads
- data exports to spreadsheets “temporarily” that become permanent
- long-lived sessions left open on shared machines in branches
Once a workaround becomes socialised, it stops feeling risky.
Why banks and fintechs see the same behaviour (just with different excuses)
The best way to predict bypass behaviour is to watch where friction meets accountability.
Friction comes from MFA prompts, password resets, slow VDI sessions, limited copy/paste, blocked uploads, conditional access challenges, or clunky privileged access tooling.
Accountability comes from SLAs, customer impact, revenue targets, and the very human desire not to be the person who “slowed everything down.”
Put them together and you get the same pattern NSW Health saw:
- A legitimate urgency arises.
- A control adds delay.
- A workaround “solves” the immediate problem.
- The workaround spreads.
December reality check: peak workloads make workarounds multiply
It’s December 2025. In finance, end-of-year is a perfect storm: holiday staffing gaps, higher transaction volumes, year-end reconciliations, and more social engineering attempts while teams are tired. This is exactly when bypass behaviour spikes—because everything feels urgent.
That timing matters because AI-based monitoring is most valuable when humans are most overloaded.
Where AI helps: detecting bypass patterns in real time
AI in cybersecurity is most useful when it does one thing well: spot patterns humans can’t reliably see across thousands of sessions, devices, and apps. The goal isn’t “AI security” as a buzzword. It’s earlier detection with fewer false alarms.
Here are practical AI-driven controls that map directly to the behaviours described in the NSW audit.
1) AI-powered user and entity behaviour analytics (UEBA)
Answer first: UEBA flags unusual behaviour, such as session patterns, access sequences, and data movement, that often precedes fraud or data leakage.
In a bank or fintech, UEBA can detect:
- logins at atypical times for a given role
- “impossible travel” patterns
- sudden access to systems outside the user’s normal portfolio
- repeated access denials followed by success (a sign of trial-and-error)
- unusual file downloads or data exports before logout
This matters because bypass behaviour often looks like “minor anomalies” until you connect them.
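To make that concrete, here is a minimal sketch of session-level anomaly scoring with an off-the-shelf isolation forest. The four features and the example values are illustrative assumptions, not a production feature set.

```python
# Sketch: score sessions against a learned baseline of "normal".
# Features and values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per historical session:
# [login_hour, systems_touched, denied_then_allowed, mb_exported]
baseline = np.array([
    [9, 3, 0, 2], [10, 4, 0, 5], [14, 3, 1, 1], [11, 2, 0, 3],
    [9, 3, 0, 4], [13, 5, 0, 2], [10, 3, 0, 1], [15, 4, 1, 6],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# A 2am login touching 11 systems, 4 denials before success, bulk export.
suspect = np.array([[2, 11, 4, 850]])
if model.predict(suspect)[0] == -1:  # -1 means "anomalous"
    score = model.decision_function(suspect)[0]  # lower = more unusual
    print(f"Flag session for review (score {score:.3f})")
```

No single feature here is damning on its own; it’s the combination, scored against the baseline, that surfaces the session.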
2) Session risk scoring (instead of binary allow/deny)
Answer first: Risk scoring makes access decisions adapt to context, reducing the need for workarounds.
One reason people bypass controls is that controls are rigid. AI can help shift from “blocked vs allowed” to “allowed with safeguards.” Examples:
- Step-up authentication only when risk rises (new device, new location, unusual resource)
- Shorter session lifetimes on shared machines, longer on strongly verified managed devices
- Read-only access for risky sessions (useful in incident response and operations)
This reduces friction for normal work while tightening the net around abnormal work.
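As a sketch of what “allowed with safeguards” can look like in code, the snippet below combines a few boolean context signals into a score and picks a safeguard instead of a hard block. The signal names, weights, and thresholds are invented for illustration; in practice they would be tuned or learned from your own session data.

```python
# Sketch of adaptive session decisions: risk score -> safeguard,
# not a binary allow/deny. Weights and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SessionContext:
    new_device: bool
    new_location: bool
    shared_workstation: bool
    unusual_resource: bool

WEIGHTS = {
    "new_device": 0.35,
    "new_location": 0.30,
    "shared_workstation": 0.20,
    "unusual_resource": 0.30,
}

def decide(ctx: SessionContext) -> str:
    score = sum(w for name, w in WEIGHTS.items() if getattr(ctx, name))
    if score >= 0.60:
        return "step_up_auth"       # challenge, then allow
    if score >= 0.30:
        return "read_only_session"  # allow with safeguards
    if ctx.shared_workstation:
        return "allow_short_ttl"    # shorter session lifetime
    return "allow"

# New device from a new location: tighten, don't block.
print(decide(SessionContext(True, True, False, False)))  # step_up_auth
```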
3) AI-driven data loss prevention (DLP) that understands intent
Answer first: Modern DLP can classify sensitive data and detect risky exfiltration paths without forcing the business into endless exceptions.
The NSW audit described saving patient data on personal devices and uploading to unsecured apps. In finance, the equivalents are customer IDs, account statements, loan docs, and internal risk reports.
AI can improve DLP by:
- classifying documents by content (not just filenames)
- detecting copy/paste into web forms
- spotting uploads to unsanctioned cloud apps
- identifying “bulk movement” patterns typical of exfiltration
If your DLP is noisy, teams ignore it. If your DLP is accurate, teams trust it.
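As a toy illustration of content-based classification (matching the document body, not the filename), the patterns below are deliberately simplified assumptions; real AI-driven DLP layers trained classifiers and validation logic on top of this kind of matching to keep the noise down.

```python
# Toy content classifier: inspect the text, not the filename.
# Patterns are simplified assumptions, not production DLP rules.
import re

SENSITIVE = {
    "card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "statement": re.compile(r"(?i)\b(account statement|closing balance)\b"),
}

def classify(text: str) -> list[str]:
    return [label for label, pattern in SENSITIVE.items() if pattern.search(text)]

doc = "Closing balance for IBAN GB29NWBK60161331926819 attached."
hits = classify(doc)
if hits:
    print(f"Block upload to unsanctioned app: matched {hits}")
```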
4) Privileged access analytics for “crown jewel” systems
Answer first: If not all critical systems are monitored equally, attackers will pick the blind spots.
The NSW audit noted that some “crown jewel” systems didn’t receive the same level of monitoring. Banks have their own crown jewels—payments rails, core banking, identity platforms, treasury systems, credit decision engines.
AI helps by correlating:
- privileged session recordings
- command sequences
- unusual privilege escalation
- access to sensitive tables or payment templates
Even better: it helps separate legitimate break-glass access from suspicious “I’m just doing my job” misuse.
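Here is a sketch of one such correlation: privilege escalation followed by access to sensitive payment objects in the same session. The event shape is an assumption about what a PAM or session-recording feed might emit.

```python
# Sketch: flag "escalate, then touch sensitive payment objects".
# Event fields are assumptions about a PAM/session-recording feed.
from collections import defaultdict

SENSITIVE_TARGETS = {"payment_templates", "credit_decision_engine"}

def escalation_then_sensitive(events: list[dict]) -> list[str]:
    escalated = defaultdict(bool)
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["action"] == "privilege_escalation":
            escalated[e["user"]] = True
        elif escalated[e["user"]] and e["target"] in SENSITIVE_TARGETS:
            alerts.append(f"{e['user']}: {e['action']} on {e['target']} after escalation")
    return alerts

events = [
    {"user": "ops3", "ts": 1, "action": "login", "target": "core-banking"},
    {"user": "ops3", "ts": 2, "action": "privilege_escalation", "target": "core-banking"},
    {"user": "ops3", "ts": 3, "action": "export", "target": "payment_templates"},
]
print(escalation_then_sensitive(events))
```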
Fix the workflow first: security controls people can actually live with
AI won’t save a broken workflow. It will just generate better alerts about the broken workflow.
The most effective strategy I’ve seen: reduce bypass incentives, then use AI to catch what remains. Here’s a practical playbook for financial institutions.
Make secure behaviour the fastest path
If logging in/out is the pain point, don’t shame people—measure it.
- Map how many times per hour key roles authenticate (operations, branch staff, call-centre, incident responders)
- Reduce reliance on password complexity by moving to stronger factors (device-bound credentials, phishing-resistant MFA)
- Use single sign-on and coherent session management across core systems
- Improve device performance and application latency (slow apps create “security debt”)
If the secure route is slower than the insecure route, your policy is basically a suggestion.
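As a sketch of “measure it”, the snippet below counts authentication events per role per hour from a log extract. The column names are assumptions about your SIEM or identity-provider export.

```python
# Sketch: authentication frequency by role, from a log extract.
# Column names are assumptions about your SIEM/IdP export format.
import pandas as pd

auth_log = pd.DataFrame({
    "role": ["ops", "ops", "branch", "branch", "branch", "call_centre"],
    "user": ["t1", "t1", "b2", "b2", "b2", "c3"],
    "ts": pd.to_datetime([
        "2025-12-01 09:02", "2025-12-01 09:40",
        "2025-12-01 09:05", "2025-12-01 09:20", "2025-12-01 09:55",
        "2025-12-01 10:10",
    ]),
})

auths_per_hour = (
    auth_log.set_index("ts")
            .groupby("role")
            .resample("1h")["user"]
            .count()
)
# Roles forced to re-authenticate most often are your friction hotspots.
print(auths_per_hour)
```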
Treat “workarounds” as product feedback, not misconduct
Yes, some actions are negligent. But widespread bypass behaviour usually means the system design failed the user.
Create a lightweight mechanism for teams to report:
- which control they bypassed
- why they did it
- what outcome they needed
Then fix the top three workflow failures each quarter. That’s how you unwind a culture of non-compliance.
Build an “access control observability” dashboard
Security teams need the same thing product teams rely on: visibility.
Track metrics like:
- average authentication time by role
- number of step-up prompts per user/day
- session duration on shared workstations
- frequency of data exports by system
- unsanctioned app upload attempts
AI models perform better when the organisation measures the environment they operate in.
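Here is a sketch of how a few of these metrics roll up from a unified telemetry stream; the event shape and field names are assumptions to adapt to whatever your SIEM actually emits.

```python
# Sketch: dashboard metric rollups over a unified telemetry stream.
# Event shape and field names are assumptions, not a standard.
from collections import Counter

telemetry = [
    {"type": "step_up_prompt", "user": "b2", "day": "2025-12-01"},
    {"type": "step_up_prompt", "user": "b2", "day": "2025-12-01"},
    {"type": "data_export", "user": "t1", "system": "core-banking"},
    {"type": "unsanctioned_upload", "user": "c3", "app": "filedrop.example"},
]

metrics = {
    "step_up_prompts_per_user_day": Counter(
        (e["user"], e["day"]) for e in telemetry if e["type"] == "step_up_prompt"
    ),
    "exports_by_system": Counter(
        e["system"] for e in telemetry if e["type"] == "data_export"
    ),
    "unsanctioned_upload_attempts": sum(
        e["type"] == "unsanctioned_upload" for e in telemetry
    ),
}
print(metrics)
```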
Common questions finance leaders ask (and direct answers)
“Isn’t this just an employee training problem?”
Training helps, but it’s not the core fix. If the workflow requires repeated friction under time pressure, training won’t beat habit. Redesign the workflow, then train to reinforce it.
“Will AI increase false positives and slow us down?”
Bad implementations will. Good ones reduce noise by correlating signals. The win condition is fewer alerts with higher confidence, not more dashboards.
“What should we prioritise first: fraud AI or access-control AI?”
If you’re handling sensitive financial data, you need both. But I’d start with identity and session risk because it improves fraud outcomes too. Fraud controls are weaker when identity and device trust are weak.
What to do next (a practical 30-day plan)
If you want progress in a month—not a year—do this:
- Pick three crown jewel workflows (e.g., payment release, customer ID verification, privileged production access).
- Document the top five bypasses in each workflow (from interviews and logs).
- Instrument session telemetry (authentication events, device posture, data movement); a minimal event shape is sketched after this list.
- Deploy a narrow AI detection use case (UEBA for privileged access or anomalous exports) with a clear success metric.
- Remove one major friction point (SSO improvement, phishing-resistant MFA rollout for a pilot group, or faster shared workstation re-authentication).
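For the telemetry step, a minimal, consistent event shape is often the fastest starting point. The sketch below is one possible schema; the field names are assumptions to adapt, not a standard.

```python
# Sketch: one consistent event shape for auth, device posture, and
# data movement. Field names are assumptions to adapt to your stack.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SessionEvent:
    ts: str
    user: str
    role: str
    event_type: str    # "auth" | "device_posture" | "data_movement"
    device_id: str
    managed_device: bool
    detail: str

event = SessionEvent(
    ts=datetime.now(timezone.utc).isoformat(),
    user="ops3",
    role="payments_ops",
    event_type="data_movement",
    device_id="wks-branch-114",
    managed_device=True,
    detail="export:payment_templates:12MB",
)
print(json.dumps(asdict(event)))  # ship to your SIEM or data lake
```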
Do that, and you’ll see the same thing every time: fewer workarounds, better visibility, and incident response that’s based on evidence instead of hunches.
Security failures rarely start as “attacks.” They start as “temporary exceptions.” NSW Health’s audit is a vivid reminder that when bypass becomes normal, breaches become inevitable. For banks and fintechs, AI in cybersecurity isn’t about replacing controls—it’s about making controls fit real work, and catching risky behaviour early enough to stop it.
If you’re planning your 2026 roadmap for AI in finance and fintech, here’s the question worth debating internally: which critical process in your organisation quietly depends on people breaking the rules to meet the SLA—and what would it take to make the secure path the easy path?