Government IT modernisation budgets show where AI in finance succeeds: core systems, data and resilience. Use these lessons to scale AI safely.

Gov IT modernisation: What fintech can learn now
Australia’s mid-year budget update quietly delivered a loud message: core systems matter again. In the latest MYEFO allocations, multiple federal agencies received dedicated funding for IT modernisation—$60.9m for the ATO to progress an ERP modernisation business case, $44.1m for the AER over five years (plus $7.1m ongoing), and $137.3m for Veterans’ Affairs for ICT, digital and data enhancements. The AFP’s funding is redacted, which tells you it’s not small—and not optional.
If you work in AI in finance and fintech, this isn’t a “government story.” It’s a signal that the next phase of AI adoption won’t be won by flashy prototypes. It’ll be won by organisations that fix the plumbing: data models, identity, integration, controls, and operational resilience.
I’ve found that teams underestimate how quickly “AI ambition” turns into “systems reality.” You can’t run reliable fraud detection, automated credit decisioning, or real-time risk analytics on brittle back-office platforms and fragmented data.
What these IT modernisation budgets really signal
Answer first: These allocations show that large institutions are treating digital transformation as infrastructure—like roads, not renovations—and that mindset is spreading across sectors, including finance.
The MYEFO funding above isn’t framed as an AI spend (the separate $225m AI package is the headline elsewhere). Instead, it’s a set of multi-year technology investments aimed at core capability: ERP, data and digital systems, and essential ICT upgrades.
That’s exactly where most AI programs succeed or fail.
Modernisation is being funded as multi-year capability, not a project
The ATO’s $60.9m over two years is explicitly to build a “second pass” business case for ERP modernisation. Translation: the government is paying to get the design, scope, controls, and ROI right before it spends bigger.
Fintech leaders should take the hint. If you’re rolling out AI in finance—say, AI-powered underwriting or transaction monitoring—treat the early phase as capability design, not feature delivery. Your “second pass” is often where you discover:
- Your event data is inconsistent across channels
- Your identity layer can’t support risk-based authentication
- Your model monitoring can’t meet audit requirements
- Your vendor contracts don’t cover retraining, drift, or incident response
Regulators are modernising too, and that changes the bar
The AER’s funding includes upgrading “core data and digital systems” and explicitly aims to reduce regulatory burden. That’s not just operational efficiency—it’s a foundation for faster reporting, better analytics, and potentially more automated compliance interactions.
Finance teams should assume the same trajectory: digitised oversight. As regulators improve their own data capability, they’ll expect cleaner submissions, tighter lineage, and shorter turnaround.
For banks and fintechs, this shifts “compliance” from a quarterly scramble to an always-on operating model.
Why core systems matter more than your next AI use case
Answer first: AI in finance delivers measurable value only when you can trust the inputs, trace the decisions, and operate the models safely—none of which is possible with outdated core platforms.
It’s tempting to chase the highest-visibility wins: an AI assistant for customer service, a GenAI workflow for analysts, a smarter collections model. These can work, but the biggest risk is building AI on top of brittle foundations.
Here’s what government agencies are effectively paying for—and what it maps to in fintech.
ERP modernisation is really data governance modernisation
ERP sounds boring until you connect it to financial reality. It touches:
- Financial controls and reconciliations
- Procurement and vendor risk
- Workforce and access provisioning
- Audit trails and segregation of duties
When the ATO funds ERP modernisation planning, it’s funding the ability to run a more coherent, controlled operation.
In fintech terms: if you’re scaling, your “ERP moment” arrives fast. AI-powered finance operations (forecasting, anomaly detection in spend, automated close) require consistent ledgers, clean master data, and reliable integrations.
Snippet-worthy truth: AI can’t fix messy financial controls—it just automates the mess.
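To make that concrete, here’s a minimal sketch of spend anomaly detection: a per-category z-score flag. The field names are illustrative; the assumption that matters is in the docstring, a single consistent ledger with clean categories, which is exactly what ERP modernisation buys you.

```python
from statistics import mean, stdev

def flag_spend_anomalies(ledger: list[dict], z_threshold: float = 3.0) -> list[dict]:
    """Flag ledger entries whose amount is a statistical outlier within
    their category. Assumes one consistent ledger with clean category
    master data -- the thing messy financial controls break first."""
    by_category: dict[str, list[float]] = {}
    for entry in ledger:
        by_category.setdefault(entry["category"], []).append(entry["amount"])

    flagged = []
    for entry in ledger:
        amounts = by_category[entry["category"]]
        if len(amounts) < 5:
            continue  # not enough history in this category to judge
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma > 0 and abs(entry["amount"] - mu) / sigma > z_threshold:
            flagged.append(entry)
    return flagged
```

If your categories are inconsistent across subsidiaries, this kind of model doesn’t fail loudly; it just flags the wrong things. That’s the automated mess.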
“Core data and digital systems” is code for analytics readiness
The AER’s focus on core data and digital systems points to a familiar pattern: modernising how data is captured, stored, governed, and delivered.
For AI-driven finance, analytics readiness means:
- A consistent customer/entity resolution layer
- Real-time or near real-time event pipelines
- Data quality SLAs (yes, SLAs for data)
- Model risk management workflows
If you’re running fraud detection, the gap is usually not “model accuracy.” It’s missing labels, delayed signals, and incomplete device or identity context.
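To make “SLAs for data” concrete, here’s a minimal sketch of a pre-scoring gate that enforces freshness and completeness before a fraud model consumes a batch of events. The thresholds, field names and `Event` shape are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative SLA thresholds; real values belong in a data contract, not code.
MAX_EVENT_AGE = timedelta(minutes=5)   # freshness: the feed must be near real-time
MIN_DEVICE_COMPLETENESS = 0.98         # completeness: share of events with device context

@dataclass
class Event:
    event_id: str
    occurred_at: datetime          # timezone-aware
    device_id: str | None          # identity context the fraud model depends on

def check_data_sla(events: list[Event]) -> list[str]:
    """Return SLA violations for a batch; an empty list means it is safe to score."""
    if not events:
        return ["empty batch"]
    violations = []
    now = datetime.now(timezone.utc)

    # Freshness: flag the batch if even its newest event is stale.
    age = now - max(e.occurred_at for e in events)
    if age > MAX_EVENT_AGE:
        violations.append(f"stale batch: newest event is {age} old")

    # Completeness: missing device context is the classic silent failure mode.
    share = sum(e.device_id is not None for e in events) / len(events)
    if share < MIN_DEVICE_COMPLETENESS:
        violations.append(f"device_id completeness {share:.1%} below SLA")
    return violations
```

The point isn’t the specific thresholds; it’s that the check runs before scoring, so a degraded feed fails loudly instead of silently eroding model accuracy.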
Essential ICT upgrades are actually operational resilience upgrades
Veterans’ Affairs received $137.3m over three years for ICT, digital and data enhancements. That’s a big number tied to a simple point: services that citizens depend on can’t be built on decaying infrastructure.
Finance is in the same category. Payments outages, identity failures, and delayed incident response cost real money and trust.
If you’re building AI in fintech, resilience needs to be designed in:
- Fail-safe decisioning (what happens when the model service is down? See the sketch after this list)
- Rollback paths for model deployments
- Human-in-the-loop queues for edge cases
- Clear incident playbooks for model or data issues
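Here’s a minimal sketch of that first item, fail-safe decisioning. The model client, thresholds and queue are assumptions for illustration; the design point is that an unavailable model degrades to a conservative rules path plus human review, never to an open gate.

```python
import queue

# Hypothetical review queue; production would use a durable store, not memory.
human_review_queue: queue.Queue = queue.Queue()

def conservative_fallback(txn: dict) -> str:
    """Rules-only decision used when the model is unavailable.
    Deliberately biased toward holding risky transactions."""
    return "hold" if txn["amount"] > 1_000 else "approve"

def decide(txn: dict, score_fn, timeout_s: float = 0.2) -> str:
    """Score with the model when possible; otherwise fail safe, not open."""
    try:
        score = score_fn(txn, timeout=timeout_s)  # assumed model-client signature
    except Exception:
        # Model down or slow: take the conservative path, and record the case
        # so a human reviews it and operations can see the failure rate.
        decision = conservative_fallback(txn)
        human_review_queue.put({"txn": txn, "decision": decision,
                                "reason": "model_unavailable"})
        return decision
    return "hold" if score > 0.9 else "approve"
```

Failing safe rather than open costs some false holds during an outage, which is exactly the trade-off a regulated decision should make.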
The AI angle: modernisation is how you scale safely
Answer first: Modernisation makes AI deployable at scale by enabling auditability, security, and repeatable operations—especially in regulated finance.
When MYEFO funds ICT capability, it’s also funding the prerequisites for safe automation: identity, logging, governance, and secure access.
Identity and authentication are becoming the centre of gravity
The broader MYEFO list includes funding connected to authentication services in health, plus investment in emergency communications resilience. Different domain, same lesson: identity and availability are national priorities.
Fintech parallels are obvious:
- Step-up authentication based on risk signals
- Device intelligence and behavioural biometrics
- Continuous authentication for high-risk sessions
- Stronger controls around privileged access
AI makes identity harder, not easier. Deepfakes, synthetic identities, and automated social engineering raise the stakes. If your identity stack is outdated, AI-enabled fraud will find you.
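To illustrate, here’s a minimal sketch of risk-based step-up authentication. It assumes you already compute signals like device familiarity and a behavioural anomaly score; the weights, thresholds and action names are placeholders, not calibrated values.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    known_device: bool        # device seen on this account before
    geo_velocity_flag: bool   # impossible-travel style anomaly
    behavioural_score: float  # 0.0 (typical) .. 1.0 (highly anomalous)

def required_auth_level(signals: SessionSignals, action: str) -> str:
    """Map risk signals to an authentication requirement.
    Weights and thresholds are illustrative, not calibrated."""
    risk = signals.behavioural_score
    risk += 0.0 if signals.known_device else 0.3
    risk += 0.4 if signals.geo_velocity_flag else 0.0

    # High-risk actions get a lower tolerance, regardless of overall risk.
    if action in {"add_payee", "change_credentials"} and risk > 0.2:
        return "step_up"          # e.g. passkey or OTP re-challenge
    if risk > 0.7:
        return "step_up"
    if risk > 0.4:
        return "passive_monitor"  # allow, but watch the session closely
    return "allow"
```

The useful property is that the policy is explicit and testable, so when AI-enabled fraud shifts your signal distributions, you retune thresholds instead of rewriting the flow.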
Data enhancements aren’t “nice to have” anymore
The AER and Veterans’ Affairs allocations both reference data improvements directly. That’s the shift: data is no longer treated as exhaust; it’s treated as an asset that must be maintained.
For AI in finance and fintech, this is where high-performing teams invest:
- A shared feature store or reusable risk signals
- Data lineage from source to model input to decision
- Continuous monitoring for drift and bias (see the sketch after this list)
- Controlled access to sensitive attributes
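As referenced in the list, here’s a minimal drift check using the population stability index (PSI), a common way to compare a feature’s live distribution against its training baseline. The bin count and alert bands are conventions, not standards.

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population stability index for one feature.
    Common rule of thumb (convention, not a standard): < 0.1 stable,
    0.1-0.25 worth watching, > 0.25 likely drift."""
    lo, hi = min(baseline), max(baseline)

    def bin_shares(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            # Clamp into the baseline's range so live outliers land in edge bins.
            frac = 0.0 if hi == lo else (x - lo) / (hi - lo)
            counts[min(max(int(frac * bins), 0), bins - 1)] += 1
        # Epsilon floor avoids log(0) on empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    b, l = bin_shares(baseline), bin_shares(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))
```

Run it per feature on a schedule, and alert a human when the index crosses your chosen band.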
A practical rule: if you can’t explain a decision to a regulator, you’re not ready to automate it.
A practical playbook for fintech leaders (and bank teams)
Answer first: Use government-style funding logic—multi-year capability building, strict business cases, and risk controls—to make AI investments pay off.
This is the part most companies get wrong: they build an AI roadmap without a modernisation roadmap. You need both, and they should be linked.
1) Build your “second pass business case” for AI platforms
Before scaling AI use cases, document the platform requirements:
- What data domains must be standardised (customer, account, transaction)?
- What identity signals are required (device, session, biometric, KYC refresh)?
- What controls are mandatory (logging, approvals, segregation of duties)?
- What is the operating model (who owns drift, incidents, retraining)?
If you can’t answer these, the next AI project will become an integration project.
2) Prioritise modernisation that reduces risk and cost simultaneously
Government portfolios often fund work that reduces burden. Finance teams should do the same. High-ROI modernisation moves include:
- Consolidating duplicate data pipelines feeding fraud and AML
- Standardising event schemas across web/app/contact centre
- Automating evidence collection for compliance reporting
- Replacing brittle batch jobs with streaming where it matters
The payoff is compounding: cleaner data improves every model.
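On the schema point: the simplest version of standardisation is one typed event that every channel must emit, validated at ingestion. A minimal sketch with illustrative field names:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Channel(str, Enum):
    WEB = "web"
    APP = "app"
    CONTACT_CENTRE = "contact_centre"

@dataclass(frozen=True)
class CustomerEvent:
    """One event shape for every channel, so fraud and AML pipelines
    consume a single schema instead of three slightly different ones."""
    event_id: str
    customer_id: str
    channel: Channel
    event_type: str        # e.g. "login", "payment_initiated"
    occurred_at: datetime
    device_id: str | None  # nullable for channels without device context

def validate(event: CustomerEvent) -> None:
    """Reject malformed events at ingestion, not deep inside a model pipeline."""
    if not event.event_id or not event.customer_id:
        raise ValueError("event_id and customer_id are mandatory")
    if event.occurred_at.tzinfo is None:
        raise ValueError("timestamps must be timezone-aware")
```

Every downstream consumer (fraud, AML, analytics) then codes against one shape, which is where the compounding payoff comes from.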
3) Treat model operations like payments operations
If your AI decisions affect money movement, credit limits, or account access, operate models like critical infrastructure:
- 24/7 monitoring for latency and failure
- Canary releases and rollback policies
- Audit logs that can be reconstructed months later
- Periodic stress tests (including adversarial/fraud scenarios)
This is how you prevent “AI incidents” from becoming brand incidents.
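The canary item is worth a sketch. Deterministic hashing of the entity id routes a small, stable slice of traffic to a candidate model so outcomes can be compared before a full cutover; the percentage and model names are illustrative.

```python
import hashlib

CANARY_PERCENT = 5  # illustrative: 5% of entities score on the candidate model

def route_model(entity_id: str) -> str:
    """Deterministically assign an entity to the current or candidate model.
    Hashing the entity id (not the request) keeps each customer's decisions
    consistent for the whole canary period."""
    bucket = int(hashlib.sha256(entity_id.encode()).hexdigest(), 16) % 100
    return "credit_model_v2_canary" if bucket < CANARY_PERCENT else "credit_model_v1"
```

Pair it with the audit-log item: persist the returned model version alongside every decision, and the log can be reconstructed months later.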
4) Assume your regulators are upgrading, and prepare for it
As agencies modernise their digital systems, the compliance bar rises. Prepare by tightening:
- Data lineage and documentation
- Explainability approaches for credit and risk models
- Fairness testing and governance workflows
- Vendor risk management for AI providers
If you’re a fintech selling into banks, this is also your sales edge: you can prove you’re audit-ready.
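On explainability: for linear scorecards, per-feature contributions translate directly into the reason codes credit regulators expect. A minimal sketch, with invented feature names and weights, that only holds for models where contribution = weight × value (tree ensembles need SHAP-style attribution instead):

```python
def reason_codes(features: dict[str, float], weights: dict[str, float],
                 top_n: int = 3) -> list[str]:
    """Rank the features that pushed a linear score toward decline.
    Only positive (risk-increasing) contributions become reason codes."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    worst = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return [f"{name} raised risk by {impact:.2f}" for name, impact in worst if impact > 0]

# Hypothetical applicant: higher contribution = more risk in this toy model.
codes = reason_codes(
    features={"utilisation": 0.9, "missed_payments": 2.0, "tenure_years": 6.0},
    weights={"utilisation": 1.5, "missed_payments": 0.8, "tenure_years": -0.1},
)
print(codes)  # ['missed_payments raised risk by 1.60', 'utilisation raised risk by 1.35']
```

If a decision can’t be reduced to something this legible (or a rigorous equivalent for more complex models), that’s a sign it isn’t ready to automate.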
Where this heads in 2026: fewer pilots, more infrastructure
The MYEFO story is a reminder that serious organisations spend on the boring parts first. ERP, core data systems, ICT resilience—this is where AI either becomes reliable or remains a demo.
For AI in finance and fintech, 2026 is likely to reward teams that connect three dots: modern core systems, strong identity, and governed analytics. If you’re planning your next quarter, I’d argue your highest-return AI investment might be a data contract, an identity upgrade, or a model monitoring pipeline—not a new chatbot.
If you’re mapping your 2026 roadmap and you want AI use cases to convert into production wins (fraud detection, credit scoring, automated risk analytics), start with a blunt question: What part of our stack would fail a regulator’s first serious audit? Fix that first, then scale.