ASX Reset: What Resilience Really Takes in Markets

AI in Finance and FinTech | By 3L3C

ASX’s transformation reset is a warning for every bank and fintech: resilience comes first. Here’s how to adopt AI without amplifying operational risk.

Tags: ASX, operational resilience, AI governance, digital transformation, financial market infrastructure, risk management

ASX has been told—publicly—to reset its transformation program and set aside an extra $150 million in capital to rebuild resilience. That’s not a tech headline. It’s a market-structure headline.

If you work in banking, fintech, or capital markets, the uncomfortable lesson is simple: digital transformation isn’t a brand campaign or a multi-year roadmap slide. It’s operational survival. And the moment you add AI on top of brittle systems, you don’t get “innovation.” You get faster failure.

This post uses the ASX situation as a case study for the AI in Finance and FinTech series: what “resilient market infrastructure” actually requires, why transformation programs derail, and how to adopt AI in a way that improves reliability rather than becoming another risk.

What the ASX review is really saying (and why it matters)

The direct message from the interim inquiry is that ASX’s problems weren’t a single bad project—they were a pattern: years of underinvestment, a culture that didn’t push for excellence, and remediation that focused on “closing gaps” instead of rebuilding fundamentals.

That matters because exchanges are not like typical enterprises. An exchange is critical market infrastructure: if it stalls, thousands of downstream processes stall with it—brokerage operations, clearing and settlement workflows, liquidity provision, corporate actions, reporting, and risk management.

Three specifics from the report are especially relevant for any financial institution planning AI adoption:

  • Underinvestment compounds. Deferred upgrades in core platforms don’t stay contained; they metastasize into incident response, workarounds, and brittle integrations.
  • Stakeholder pressure can distort priorities. When shareholder outcomes dominate, resilience spending is often framed as “cost” until an incident reframes it as “existential.”
  • Consulting volume isn’t progress. The report notes ASX has faced 120+ external reviews since 2020, creating overwhelm and poorly targeted work. More reports can mean less clarity.

A transformation program that produces lots of documentation but doesn’t raise operational maturity is a liability, not a strategy.

The myth: “AI will modernise us faster”

Here’s a stance I’ll defend: AI can’t rescue a weak operating model. If your incident management is reactive, your data lineage is fuzzy, and your change controls are inconsistent, AI won’t fix that. It will amplify it.

Why? Because most high-value AI use cases in finance—fraud detection, credit decisioning, market surveillance, liquidity modelling, operational analytics—depend on three non-negotiables:

1) High-integrity data and traceability

AI systems don’t just need data. They need auditable data. In regulated environments, you must explain:

  • where data came from,
  • what transformations were applied,
  • which model version used it,
  • what decision it influenced.
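What "auditable data" means in practice can be made concrete with a minimal lineage record. This is an illustrative sketch only; the field names, sources, and model identifiers are hypothetical, and a production lineage system would persist these records immutably rather than hold them in memory.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Minimal audit trail for one model-influenced decision (illustrative)."""
    source: str              # where the data came from
    transformations: list    # what transformations were applied, in order
    model_version: str       # which model version consumed the data
    decision: str            # what decision it influenced
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: a credit decision traced back to its inputs.
record = LineageRecord(
    source="trades_feed_v2",
    transformations=["dedupe", "normalise_currency"],
    model_version="credit-risk-1.4.2",
    decision="limit_increase_denied",
)
```

The point is not the data structure itself but that every one of the four questions above maps to a field you can query when a regulator, or an incident review, asks.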

When the inquiry criticises lack of root-cause analysis and a tendency to “close gaps,” that’s basically a warning about systemic traceability debt.

2) Controlled change management

AI features often ship faster than traditional platform changes. That speed is attractive—until it collides with mission-critical environments where a small change can ripple across trading, clearing, or reporting.

If the organisation is already “firefighting,” adding AI creates more moving parts:

  • prompt or policy updates,
  • model refreshes,
  • feature store changes,
  • new vendor dependencies.

3) A culture that funds resilience

The review’s cultural critique (“unambitious,” lacking appetite to innovate) is not about hype. It’s about the willingness to invest in:

  • engineering standards,
  • testing environments,
  • redundancy,
  • training and capability.

AI adoption in finance succeeds when it’s treated as an engineering discipline, not an experiment sitting outside governance.

A better framing: “Resilience is a product”

If you want a practical way to interpret the ASX reset, try this: resilience should be managed like a product with measurable outcomes, not a vague goal.

That means defining what “good” looks like and funding it like you’d fund revenue work.

The resilience metrics that actually change behaviour

For financial market operators and large financial institutions, I’ve found these metrics force real prioritisation:

  • Service availability by function, not just overall uptime (trading, reference data, reporting, reconciliation, etc.)
  • Mean time to detect (MTTD) and mean time to restore (MTTR) for critical services
  • Change failure rate (how often a release causes an incident)
  • Recovery time objective (RTO) and recovery point objective (RPO) achieved in real tests
  • Operational toil (hours spent on manual workarounds and reconciliation)

Those last two—tested recovery objectives and operational toil—are where AI can help, if the foundations exist.

Where AI fits: preventing “firefighting mode” from becoming permanent

The inquiry highlights a pattern many institutions recognise: too many incidents, too many remediation tasks, too many stakeholders, and not enough time to rebuild properly.

AI can help break that cycle, but only when it’s aimed at operational outcomes rather than “AI adoption” as a vanity metric.

1) AI-assisted incident triage and root-cause analysis

Used well, AI can reduce the time between “something’s wrong” and “we know what changed.” Examples that work in practice:

  • log clustering to identify novel incident signatures
  • correlation of deploy events with service degradation
  • automated summarisation of incident timelines for post-incident reviews

The win isn’t fancy dashboards. The win is fewer repeat incidents because root causes are identified and fixed.

2) Market surveillance and anomaly detection with stronger controls

AI-driven anomaly detection is valuable—but in market infrastructure it must be governed tightly:

  • clear thresholds for alerting vs escalation
  • explainable features (what triggered the alert)
  • rigorous back-testing against historical events
  • controls against model drift
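The first two controls, explicit thresholds and explainable triggers, can be sketched with something as plain as a rolling z-score detector. This is deliberately simple and illustrative, not a surveillance model: the window size and threshold values are assumptions, and a production system would add back-testing and drift monitoring around it.

```python
import statistics

def zscore_alerts(series, window=20, alert_z=3.0, escalate_z=5.0):
    """Flag points whose z-score vs a trailing window crosses fixed thresholds.

    The thresholds encode the alert-vs-escalate distinction explicitly, and
    the returned z-score is the explainable feature: what triggered the alert.
    """
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = statistics.mean(hist), statistics.pstdev(hist)
        if sigma == 0:
            continue  # no variation in window; cannot score
        z = abs(series[i] - mu) / sigma
        if z >= escalate_z:
            alerts.append((i, "escalate", z))
        elif z >= alert_z:
            alerts.append((i, "alert", z))
    return alerts

# Hypothetical metric stream: stable baseline, then a spike
base = [100, 102, 98, 101, 99] * 4
alerts = zscore_alerts(base + [130])
print(alerts)  # the spike at index 20 is escalated
```

Note what the governance list above demands that this sketch does not provide: back-testing against historical events and drift controls. Without those, even a detector this simple degrades into noise.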

If your baseline systems are underinvested, surveillance AI becomes noisy and expensive: lots of alerts, little action.

3) AI for resilience testing (the underrated use case)

This is my favourite “quiet” AI use case for financial services: test generation and scenario coverage.

AI can help teams:

  • generate test cases from incident history
  • map dependencies and propose failure scenarios
  • simulate load or data spikes

Resilience improves when testing moves from “we ran a DR exercise once” to “we validate failure modes continuously.”
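The "generate scenarios from incident history" idea can be sketched without any model at all: cross a dependency map with incident counts and you get a prioritised failure-injection backlog. The service names and record shapes below are hypothetical; an AI-assisted version would propose novel scenarios beyond this mechanical cross-product.

```python
from collections import Counter

def propose_failure_scenarios(dependencies, incident_history):
    """Propose failure-injection tests: one per (service, dependency) pair,
    prioritised by how often that dependency appeared in past incidents."""
    incident_counts = Counter(i["component"] for i in incident_history)
    scenarios = [
        {
            "service": service,
            "inject_failure_in": dep,
            "priority": incident_counts.get(dep, 0),
        }
        for service, deps in dependencies.items()
        for dep in deps
    ]
    return sorted(scenarios, key=lambda s: -s["priority"])

# Hypothetical dependency map and incident history
dependencies = {
    "trading": ["reference-data", "matching-engine"],
    "settlement": ["reference-data", "payments"],
}
incident_history = [
    {"component": "reference-data"},
    {"component": "reference-data"},
    {"component": "payments"},
]

scenarios = propose_failure_scenarios(dependencies, incident_history)
print(scenarios[0])  # reference-data failures come first: two past incidents
```

Even this naive ranking pushes testing toward where incidents actually happen, which is the difference between a DR exercise run for the audit trail and one run to learn something.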

The transformation trap ASX fell into (and how to avoid it)

A “reset” usually happens when transformation becomes a bundle of projects rather than a coherent operating model change.

Here are the traps I see most often in banks and market operators—and what to do instead.

Trap 1: Optimising for optics (shareholders, timelines, press)

Fix: Create a protected resilience budget with board-level visibility. If resilience spending competes with feature delivery every quarter, resilience loses until it’s too late.

Trap 2: Too many reviews, too little execution

The inquiry notes ASX has had over 120 reviews since 2020. That’s a signal of governance overload.

Fix: Convert recommendations into a single, prioritised backlog with owners and deadlines. Kill duplicates. Be ruthless.

Trap 3: “Closing gaps” instead of designing for excellence

Gap-closing is incremental; excellence is architectural.

Fix: Choose 2–3 non-negotiable target-state capabilities and fund them end-to-end:

  • modern observability (metrics, logs, traces) for critical services
  • automated change controls and release validation
  • provable DR readiness for critical workflows

Trap 4: Treating AI as a side quest

AI pilots often live in innovation teams with limited accountability.

Fix: Put AI into the same governance lane as other critical systems:

  • model risk management aligned to operational risk
  • documented fallback modes (what happens if the model is wrong or offline)
  • clear ownership of outcomes (not just “deployment”)

A practical playbook: AI-ready resilience for financial institutions

If you’re a CIO, COO, Head of Risk, or transformation leader, here’s a concrete sequence that works better than “big bang” programs.

  1. Stabilise the core: fix incident response, dependency mapping, and critical-path monitoring.
  2. Instrument everything: observability before optimisation. If you can’t see it, you can’t improve it.
  3. Automate change validation: build release gates for performance, security, and regression.
  4. Add AI where it reduces risk first: triage, testing, anomaly detection, reconciliation.
  5. Only then scale AI to differentiating use cases: personalisation, predictive insights, advanced analytics.
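Step 3, automated change validation, is worth making concrete because it is where most programs stay vague. A release gate is just a hard comparison of measured metrics against agreed limits; here is a minimal sketch, with invented metric names and thresholds, of the shape such a gate takes.

```python
def release_gate(metrics, thresholds):
    """Return (passed, failures): block a release when any measured metric
    breaches its agreed threshold. A missing metric counts as a breach,
    because an unmeasured release should never pass the gate."""
    failures = [
        name for name, limit in thresholds.items()
        if metrics.get(name, float("inf")) > limit
    ]
    return (not failures, failures)

# Hypothetical gate: latency, error rate, and regression suite results
thresholds = {"p99_latency_ms": 250, "error_rate_pct": 0.1, "regression_failures": 0}
candidate = {"p99_latency_ms": 310, "error_rate_pct": 0.02, "regression_failures": 0}

passed, failures = release_gate(candidate, thresholds)
print(passed, failures)  # gate blocks the release on p99 latency
```

The design choice that matters is the "missing metric fails" default: it forces instrumentation (step 2) before automation (step 3), which is exactly the ordering the sequence above argues for.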

A line I use internally: AI belongs on top of strong plumbing, not instead of it.

What to do next if your transformation is wobbling

ASX’s reset is a reminder that in financial markets, resilience is not negotiable—and it’s not something you can “catch up on” in a quarter. If your organisation is pushing hard on AI in finance and fintech initiatives, treat this moment as a prompt to check your foundations.

Start with one honest question: If we doubled our rate of change next quarter, would our reliability improve or degrade? If the answer is “degrade,” your AI roadmap is ahead of your operating model.

If you want help pressure-testing your transformation plan—especially the risk and resilience implications of AI adoption—I’m happy to share what a strong “AI-ready resilience” assessment looks like and which metrics to put in front of your board first.