ASX Reset: What AI-Ready Finance Transformations Do
ASX has been told to “reset” its transformation program and keep an extra $150 million in capital until it can rebuild technological and organisational resilience. That number isn’t just a balance-sheet detail—it’s a price tag on something finance leaders tend to underestimate: operational resilience is a product feature.
If you work in banking, payments, wealth, insurance, or fintech, the uncomfortable lesson is familiar. Systems that “mostly work” become the business plan… right up until they don’t. And once a market operator or major institution is in constant incident-response mode, strategy shrinks to whatever fits between outages, audits, and stakeholder pressure.
I’m going to use the ASX reset as a case study for a bigger point in our AI in Finance and FinTech series: AI doesn’t save a transformation that lacks fundamentals—but it can drastically improve how you find risk, prioritise work, and prove resilience.
The real failure mode: “closing gaps” instead of building capability
The blunt takeaway from the review is that resilience problems rarely come from one bad system. They come from a pattern: deferred upgrades, thin internal capability, and governance that optimises for short-term shareholder outcomes at the expense of long-term reliability.
The report’s language (unambitious, low appetite to innovate, years of underinvestment in tech and people) reads like a checklist of what happens when an organisation treats technology as a cost centre rather than critical infrastructure. In finance, that mindset is especially dangerous because:
- You can’t “pause the business” to replatform a core ledger or market system.
- Outages and data issues create regulatory risk, not just customer churn.
- Resilience work is invisible when it’s done well—and politically painful when it’s not.
Why finance transformations stall in the same place
Most companies get this wrong: they run transformation as a portfolio of projects, not as capability building. Projects end; capabilities compound.
In practical terms, a capability-led approach means you can answer questions like:
- Do we know our top 20 operational risks right now, with evidence?
- Can we recover critical services within agreed RTO/RPO targets?
- Is our change pipeline safer this quarter than last quarter?
- Do we have the people to run and improve the platform without vendors holding the keys?
If the honest answer is “not really,” then adding AI tools won’t fix the root cause. But once you commit to capability, AI becomes extremely useful.
AI’s best role in resilience: making risk measurable and prioritisation defensible
AI is most valuable in transformations when it does three unglamorous things: observes, correlates, and recommends. The win isn’t novelty; the win is speed and clarity.
Here’s what that looks like in a finance environment.
1) AI-assisted incident intelligence (less noise, faster root cause)
A single “incident” often produces thousands of logs, alerts, chat messages, and ticket updates. Humans are good at pattern recognition, but we’re terrible under fatigue and time pressure.
AI can:
- Cluster alerts into likely single events (deduplication + correlation)
- Summarise timelines from fragmented sources (tickets, chatops, monitoring)
- Flag suspected causal changes (deploys, config changes, dependency failures)
- Recommend next checks based on similar historical incidents
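The deduplication and correlation step can be surprisingly simple. Here's a minimal sketch of time-window clustering, using hypothetical alert records (the service names, fields, and five-minute window are illustrative assumptions; real alerts would come from your monitoring platform's API):

```python
from datetime import datetime, timedelta

# Hypothetical alert records; real ones would come from a monitoring API.
ALERTS = [
    {"service": "settlement-api", "signal": "5xx_rate", "ts": "2025-01-10T09:00:12"},
    {"service": "settlement-api", "signal": "latency_p99", "ts": "2025-01-10T09:01:40"},
    {"service": "market-data", "signal": "feed_lag", "ts": "2025-01-10T11:30:05"},
    {"service": "settlement-api", "signal": "5xx_rate", "ts": "2025-01-10T09:02:55"},
]

def cluster_alerts(alerts, window_minutes=5):
    """Group alerts into likely single events by service and time proximity."""
    clusters = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        t = datetime.fromisoformat(alert["ts"])
        for cluster in clusters:
            if (cluster["service"] == alert["service"]
                    and t - cluster["last_seen"] <= timedelta(minutes=window_minutes)):
                cluster["alerts"].append(alert)
                cluster["last_seen"] = t
                break
        else:
            clusters.append({"service": alert["service"], "alerts": [alert], "last_seen": t})
    return clusters

clusters = cluster_alerts(ALERTS)
```

Four raw alerts collapse into two candidate incidents. In production you'd correlate across services via a dependency map too, but even this level of grouping cuts pager noise dramatically.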
This matters because the report describes a firefighting dynamic. AI can’t prevent every failure, but it can shorten the time from symptom to cause, which is the difference between a contained issue and a reputational crisis.
2) AI for “continuous control monitoring” (stop relying on annual reviews)
Many financial institutions still operate controls like it’s 2010: periodic sampling, manual evidence collection, and spreadsheet attestations. That approach can’t keep up with modern release cycles.
AI-enabled continuous monitoring can:
- Detect anomalies in privileged access and system changes
- Identify control drift (a control that exists on paper but not in practice)
- Auto-generate audit-ready evidence packages from system telemetry
- Highlight process exceptions that correlate with incidents
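Control drift detection, at its core, is a diff between the documented control and observed telemetry. A minimal sketch, assuming a hypothetical approved-admins list and access-event feed (both names and fields are invented for illustration):

```python
# Hypothetical inputs: a documented control (approved admins) and observed telemetry.
APPROVED_ADMINS = {"alice", "bob"}

ACCESS_EVENTS = [
    {"user": "alice", "action": "config_change", "system": "payments-core"},
    {"user": "carol", "action": "config_change", "system": "payments-core"},
    {"user": "bob", "action": "restart", "system": "ledger"},
]

def find_control_drift(events, approved):
    """Return events where observed practice diverges from the documented control."""
    return [e for e in events if e["user"] not in approved]

drift = find_control_drift(ACCESS_EVENTS, APPROVED_ADMINS)
```

Run continuously against real telemetry, this turns "the control exists on paper" into a checkable claim, and every non-empty `drift` list is audit-ready evidence of an exception.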
A scathing review often ends with “more reporting.” The better outcome is less reporting, more instrumentation.
3) AI-driven prioritisation (fix what reduces risk, not what’s loudest)
The interim report notes that ASX has faced more than 120 external review reports since 2020, leaving the organisation overwhelmed, with poorly targeted work and outcomes that miss stakeholder intent. That’s not an effort problem—it’s a prioritisation and operating-model problem.
AI can help build a defensible prioritisation model by combining:
- Incident frequency and blast radius
- Service criticality and customer impact
- Tech debt indicators (age, unsupported components, fragile dependencies)
- Change failure rate and rollback frequency
- Known regulatory obligations and audit findings
The output should be simple: a ranked backlog where each item states risk reduced per dollar and per engineering week.
If you can’t express that, you’re effectively prioritising by opinion.
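To make that concrete, here's a minimal sketch of a risk-per-effort ranking. The backlog items and scores are hypothetical; in practice the `risk_reduced` input would be derived from the signals listed above (incident frequency, blast radius, criticality, debt indicators):

```python
# Hypothetical backlog items; risk_reduced would come from your scoring model.
BACKLOG = [
    {"name": "Replace end-of-life message broker", "risk_reduced": 40, "eng_weeks": 8},
    {"name": "Add automated rollback", "risk_reduced": 25, "eng_weeks": 2},
    {"name": "Refactor batch scheduler", "risk_reduced": 10, "eng_weeks": 6},
]

def rank_by_risk_per_week(items):
    """Rank so each item states risk reduced per engineering week."""
    for item in items:
        item["risk_per_week"] = item["risk_reduced"] / item["eng_weeks"]
    return sorted(items, key=lambda i: i["risk_per_week"], reverse=True)

ranked = rank_by_risk_per_week(BACKLOG)
```

Note how the ranking surfaces the small rollback-automation item ahead of the big replatforming work: the point of the exercise is that effort-adjusted risk reduction, not project size or noise level, decides the order.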
“Innovation culture” isn’t posters—it’s incentives, guardrails, and data
The report’s critique on lack of innovation appetite is a reminder that innovation isn’t a vibe. It’s an operating system.
In finance, innovation fails when teams are punished for change but blamed for stagnation. The fix is to create safe change:
- Smaller releases, more often
- Strong testing and environment parity
- Clear ownership of services
- Reliable rollback mechanisms
- Time allocated for debt reduction and resilience work
Where AI fits into innovation without increasing risk
AI can accelerate delivery, but only if you treat it like a power tool with a safety switch.
The pattern I’ve found works:
- Start with internal use cases (engineering productivity, risk analytics, audit evidence) before customer-facing AI.
- Use human-in-the-loop approvals for high-impact decisions.
- Put policy gates around data access, logging, and model outputs.
- Instrument everything: prompt logs, model versions, access trails.
In other words: ship AI where it reduces operational risk first, not where it creates new headlines.
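The human-in-the-loop and policy-gate pattern can be sketched in a few lines. This is an illustrative skeleton, not a real governance framework; the function, statuses, and approver identifier are all assumptions:

```python
def apply_policy_gate(decision, impact, approved_by=None):
    """Hold high-impact AI-proposed actions until a human explicitly signs off."""
    if impact == "high" and approved_by is None:
        # Low-impact actions flow through; high-impact ones wait for a person.
        return {"status": "held", "reason": "awaiting human approval"}
    record = {"status": "executed", "decision": decision}
    if approved_by:
        record["approved_by"] = approved_by  # retained for the audit trail
    return record

held = apply_policy_gate("block_account", impact="high")
done = apply_policy_gate("block_account", impact="high", approved_by="risk-officer-7")
```

The design choice that matters is that the gate produces a record either way: "instrument everything" means the approval trail is a by-product of the workflow, not a separate reporting task.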
A practical playbook: how finance leaders should “reset” a transformation
A reset isn’t a rebrand. It’s a redesign of what gets funded, how work is measured, and who is accountable.
Here’s a concrete approach that fits banks, market operators, super funds, insurers, and large fintech platforms.
Step 1: Reframe the objective as resilience outcomes
Define 6–10 measurable outcomes (not projects). Examples:
- 99.95% availability for defined critical services
- RTO under 60 minutes for specified trading/payment workflows
- Change failure rate under 10% (or a target trendline)
- Mean time to restore (MTTR) down 30% within 2 quarters
- End-of-life tech reduced by 50% across tier-1 systems
If you can’t define the outcomes, you’ll drift back to activity.
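Two of the outcomes above fall straight out of records you already have. A minimal sketch with hypothetical deploy and incident data (the 10% target and the numbers are illustrative):

```python
# Hypothetical quarterly records; real ones come from CI/CD and incident tooling.
DEPLOYS = [{"id": i, "failed": i % 10 == 0} for i in range(1, 41)]  # 4 of 40 fail
INCIDENTS = [{"minutes_to_restore": m} for m in (22, 95, 41, 18)]

# Change failure rate: failed deploys / total deploys.
change_failure_rate = sum(d["failed"] for d in DEPLOYS) / len(DEPLOYS)

# Mean time to restore, in minutes.
mttr = sum(i["minutes_to_restore"] for i in INCIDENTS) / len(INCIDENTS)
```

With these computed every quarter, "change failure rate under 10%" and "MTTR down 30%" stop being aspirations and become a trendline someone owns.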
Step 2: Build a “resilience balance sheet” (what you owe vs what you own)
List your:
- Unsupported platforms
- Single points of failure
- Vendor black boxes
- Skills gaps (security engineering, SRE, mainframe expertise, cloud ops)
- Manual controls and evidence processes
This is uncomfortable—and that’s the point. It turns resilience into a visible financial-and-risk conversation.
Step 3: Treat data as infrastructure (especially for AI)
AI in finance collapses without trustworthy data and clear lineage. Start by standardising:
- Service catalogues and dependency maps
- Logging and observability standards
- Data classification and retention rules
- Access governance with least privilege
If you want AI to support your transformation, it needs clean signals.
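Once the service catalogue and dependency map are standardised, single points of failure become a query rather than a workshop finding. A sketch, assuming a hypothetical catalogue format (service names, the failover flags, and the two-consumer threshold are invented for illustration):

```python
from collections import Counter

# Hypothetical service catalogue: each critical service and its dependencies.
DEPENDENCIES = {
    "trading-gateway": ["auth-service", "market-data-feed"],
    "settlement": ["auth-service", "core-ledger"],
    "reporting": ["core-ledger"],
}

# Whether each dependency has a documented failover path; in practice this
# flag would live in the catalogue itself.
HAS_FAILOVER = {"auth-service": False, "market-data-feed": True, "core-ledger": False}

def single_points_of_failure(deps, failover):
    """Dependencies used by 2+ critical services with no failover path."""
    usage = Counter(d for ds in deps.values() for d in ds)
    return sorted(d for d, n in usage.items() if n >= 2 and not failover.get(d, False))

spofs = single_points_of_failure(DEPENDENCIES, HAS_FAILOVER)
```

This is exactly the kind of clean signal AI tooling needs: a correlation model can't flag a fragile shared dependency if the dependency map only exists in people's heads.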
Step 4: Put AI where it reduces toil and increases certainty
High-ROI “AI in finance operations” use cases include:
- Incident summarisation and post-incident analysis drafting
- Automated control evidence collection
- Change risk scoring (pre-deploy)
- Fraud and anomaly detection in operational workflows (not just transactions)
These are the initiatives to lead with because they show fast value without a risky customer-facing rollout.
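Pre-deploy change risk scoring, in particular, can start as a simple, explainable heuristic before any model is involved. The signals and weights below are illustrative assumptions, not a recommended calibration:

```python
def change_risk_score(change):
    """Score a proposed change 0-100 using simple, explainable signals."""
    score = 0
    score += min(change["files_touched"], 20) * 2            # breadth of change
    score += 30 if change["touches_critical_service"] else 0  # blast radius
    score += 20 if not change["has_rollback_plan"] else 0     # recoverability
    score += min(change["recent_failures_on_service"], 3) * 10  # recent instability
    return min(score, 100)

risky = change_risk_score({
    "files_touched": 8,
    "touches_critical_service": True,
    "has_rollback_plan": False,
    "recent_failures_on_service": 2,
})
```

Starting explainable matters: when a deploy gets held, engineers can see exactly which signal fired, and a learned model can replace the weights later without changing the workflow.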
Step 5: Fix the vendor relationship (ownership can’t be outsourced)
Regulated financial services can’t outsource accountability. Vendors help, but you need internal capability to:
- Challenge design decisions
- Verify controls
- Operate platforms under stress
- Run contingency procedures
If the “how” lives only in a consulting deck, resilience is temporary.
What fintech teams should learn from this (even if you’re small)
It’s easy for fintechs to read an ASX story and think, “That’s a big-institution problem.” It isn’t.
Fintechs tend to repeat the same failure pattern at startup speed:
- Rapid scaling without standardised observability
- Heavy reliance on third parties with unclear failure modes
- Security controls bolted on after growth
- AI features shipped before data governance is real
If you want to outcompete incumbents, don’t just build better UX. Build boring reliability—and use AI to keep it boring as you scale.
A strong stance: If your AI roadmap is ahead of your resilience roadmap, you’re taking the wrong risk.
Next steps: turning an ASX-style lesson into an AI-ready plan
The ASX reset is a public reminder that finance infrastructure is judged by outcomes: uptime, recovery, and trust. The extra capital requirement is one response. The better response, for every finance leader reading this, is to make resilience measurable and continuous.
If you’re planning an AI in finance program for 2026—fraud detection, credit decisioning, algorithmic trading analytics, personalised banking, or AI agents—start with the foundation: instrumentation, controls, and change safety. AI will perform better, regulators will be calmer, and your team won’t live in incident mode.
If you had to “reset” your transformation next quarter, what would you stop doing immediately—and what capability would you fund even if it never makes a flashy press release?