Trustworthy AI & Digital Twins for Resilient Cities

Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās • By 3L3C

Trustworthy AI and digital twins are becoming practical tools for resilient city governance. See how public sector teams can move from pilots to auditable decisions.

Public sector AI · Digital twins · Smart cities · Urban resilience · E-governance · Infrastructure management

A city can't "pivot" when a flood hits, when a bridge shows fatigue, or when a water main fails at 3:00 a.m. Public sector teams have to act with whatever information they have at that moment, often scattered across departments, contractors, and legacy systems. That's why the most useful smart city conversations in late 2025 aren't about flashy pilots. They're about trustworthy AI and digital twins that actually hold up under pressure.

At Bentley Systems' Year in Infrastructure (YII) 2025 in Amsterdam, a SmartCitiesWorld live podcast conversation with Bentley leaders Zubran Solaimon (Industry Strategy Director) and Dorothea Manou (Solution Manager for Cities and Urban Infrastructure) landed on a pragmatic point: digital twins and AI are leaving the "nice idea" phase and becoming operational tools. Not just for engineering teams, but for city managers responsible for climate resilience, capacity constraints, and public accountability.

This post sits inside our series "Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās" (Artificial intelligence in the public sector and smart cities) and takes the YII 2025 themes one step further: what trustworthy AI really means for e-pārvalde (e-governance), what a city-grade digital twin should contain, and how to move from demos to decisions.

Trustworthy AI in the public sector: the bar is higher (as it should be)

Trustworthy AI in smart cities means decisions are explainable, auditable, and safe enough to use for public outcomes, not just internal optimization. Private companies can A/B test their way out of mistakes. Cities can't.

When AI influences infrastructure maintenance priorities, permit workflows, or traffic management, the city must be able to answer basic questions from auditors, elected officials, and residents:

  • Why did the model recommend this action?
  • What data did it use, and was it fit for purpose?
  • Who approved the decision, and what changed afterward?
  • What happens when the data is wrong or incomplete?
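One way to make those questions answerable by construction is to record every AI-assisted recommendation as a structured, append-only entry. Here is a minimal sketch in Python; the record shape and field names are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One auditable entry per AI-assisted recommendation."""
    model_id: str               # which model produced the recommendation
    model_version: str          # pinned version, so the result is reproducible
    inputs: dict                # datasets and sensor feeds used, with their dates
    recommendation: str         # what the model suggested
    rationale: str              # plain-language drivers behind the output
    confidence: float           # model-reported confidence, 0.0 to 1.0
    approved_by: str | None = None  # filled in at the human approval gate
    outcome: str | None = None      # what actually changed afterward
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry for a maintenance recommendation.
record = DecisionRecord(
    model_id="bridge-maintenance-ranker",
    model_version="2025.11.1",
    inputs={"inspections": "2025-11-30", "traffic_counts": "2025-12-01"},
    recommendation="Prioritize deck repair on bridge B-17",
    rationale="High deterioration rate combined with heavy freight traffic",
    confidence=0.82,
)
print(json.dumps(asdict(record), indent=2))  # append this to the audit log
```

Each question above maps to a field: "why" lives in rationale, "what data" in inputs, "who approved" in approved_by, and "what changed" in outcome.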

What "trust" looks like in day-to-day city operations

In practice, I've found that "trustworthy" becomes real when it's translated into operational controls, not slogans:

  1. Provenance by default: Every dataset and sensor feed needs lineage (source, timestamp, quality flags, and ownership).
  2. Explainable outputs: Not every model must be fully interpretable, but cities need decision explanations (drivers, confidence, constraints) that non-data-scientists can understand.
  3. Human-in-the-loop gates: High-impact actions (closing lanes, rerouting transit, prioritizing capital spend) require defined approval points; see the sketch below.
  4. Monitoring and drift response: Infrastructure and mobility patterns change. Models must be monitored like any other critical system.

Snippet-worthy stance: If an AI recommendation can't survive a council meeting, it's not ready for city operations.
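To make control 3 concrete, here is a minimal sketch of an approval gate, assuming a hypothetical set of high-impact action names and an agreed confidence floor; both are placeholders a real city would define in policy:

```python
# High-impact actions are held for human sign-off instead of executing
# automatically. Action names and the 0.6 threshold are illustrative.
HIGH_IMPACT_ACTIONS = {"close_lane", "reroute_transit", "reprioritize_capital_spend"}

def submit_action(action: str, confidence: float, approver: str | None = None) -> str:
    """Return 'executed', 'queued_for_approval', or 'rejected'."""
    if action in HIGH_IMPACT_ACTIONS and approver is None:
        return "queued_for_approval"   # the defined approval point
    if confidence < 0.6:               # below the agreed error tolerance
        return "rejected"
    return "executed"

print(submit_action("close_lane", confidence=0.9))                  # queued_for_approval
print(submit_action("close_lane", confidence=0.9, approver="ops"))  # executed
print(submit_action("flag_pothole", confidence=0.4))                # rejected
```

The point is not the threshold values; it is that the gate exists in code, so bypassing it leaves a trace.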

Why the "open platform" discussion matters

The YII conversation emphasized open, interoperable platforms. That's not vendor talk; it's a governance issue.

A closed system can trap a city in:

  • duplicate datasets across departments,
  • limited oversight into how results are produced,
  • expensive rework when suppliers change.

Open interoperability is what makes e-pārvalde improvements stick. The goal isn't "one system to rule them all." It's a stable backbone where data and models can be reused across services.

Digital twins: from pretty 3D models to decision engines

A city digital twin is valuable only when it connects geometry, assets, and real-world performance into a single decision context. That's the difference between a visualization and a management tool.

The YII 2025 discussion highlighted that digital twins are being applied to real constraints: climate risks, capacity, resilience, and the skills gap. Here's how to frame a digital twin so it actually supports urban governance.

The minimum viable "city-grade" digital twin

A practical digital twin for urban infrastructure management usually needs four layers:

  1. Asset layer: roads, bridges, pipes, buildings, signals (what exists and where).
  2. Condition/performance layer: inspections, deterioration curves, maintenance history.
  3. Operational layer: live or near-live telemetry (traffic volumes, pump states, energy loads).
  4. Scenario layer: the ability to test "what if" changes (storm events, closures, demand surges, construction phasing).

If you only have layer 1, you've built a model. Useful, but limited. Layers 2–4 are where public value shows up.
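One way to see how the layers relate is as four small record types keyed to the same asset. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class Asset:                       # layer 1: what exists and where
    asset_id: str
    kind: str                      # "bridge", "pipe", "signal", ...
    location: tuple[float, float]  # lat, lon

@dataclass
class Condition:                   # layer 2: inspections and history
    asset_id: str
    last_inspection: str           # ISO date
    condition_score: float         # 0.0 (failed) to 1.0 (like new)

@dataclass
class Reading:                     # layer 3: live or near-live telemetry
    asset_id: str
    metric: str                    # "traffic_volume", "pump_state", ...
    value: float
    observed_at: str               # ISO timestamp

@dataclass
class Scenario:                    # layer 4: a "what if" to test
    name: str
    changes: dict                  # e.g. {"close": ["road-12"], "rain_mm_per_h": 40}
```

Layers 2–4 all reference the asset_id from layer 1, which is why a clean asset register is the non-negotiable starting point.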

Where AI fits inside the digital twin (and where it doesn't)

AI belongs where it reduces uncertainty or speeds up prioritization, not where it replaces engineering judgment.

Strong use cases include:

  • Predictive maintenance: ranking assets by failure risk using condition + environment + usage.
  • Anomaly detection: flagging abnormal sensor patterns (leaks, pressure drops, unusual vibration).
  • Demand forecasting: estimating peak loads for transit, water, or power.
  • Capital planning support: simulating budget trade-offs and service impacts.

Weak use cases are the ones that try to "auto-decide" without policy context, like optimizing traffic flow while ignoring safety, noise, or equity outcomes. Cities need policy constraints encoded into the scenario layer.
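To make the contrast concrete, here is a minimal sketch of risk-based maintenance ranking with one policy constraint encoded up front. The weights, fields, and flood-zone rule are invented for illustration, not calibrated values:

```python
# Rank assets by failure risk, but let an encoded policy constraint
# (flood-zone priority) override marginal risk differences.
assets = [
    {"id": "pipe-04", "condition": 0.35, "usage": 0.9, "flood_zone": True},
    {"id": "pipe-11", "condition": 0.55, "usage": 0.6, "flood_zone": False},
    {"id": "pipe-23", "condition": 0.20, "usage": 0.4, "flood_zone": True},
]

def failure_risk(a: dict) -> float:
    # Worse condition and heavier usage -> higher risk. Weights are illustrative.
    return 0.7 * (1.0 - a["condition"]) + 0.3 * a["usage"]

# Policy constraint: flood-zone assets always rank first.
ranked = sorted(assets, key=lambda a: (not a["flood_zone"], -failure_risk(a)))

for a in ranked:
    print(f'{a["id"]}: risk={failure_risk(a):.2f}, flood_zone={a["flood_zone"]}')
```

The constraint lives in the sort key, where an auditor can read it, rather than buried in training data.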

Resilience isn't one project; it's a portfolio of decisions

Resilient cities are built through repeatable decision cycles: detect → diagnose → decide → act → learn. Digital twins and AI support that cycle by making information timely and comparable.

The podcast touched on climate risk and capacity constraints. In December, many European cities are also in annual budgeting and procurement cycles, which makes this a good moment to connect resilience to planning mechanics.

A concrete example: stormwater and flood response planning

Consider a mid-size city managing heavier rainfall events:

  • The digital twin consolidates drainage assets, topography, historical overflow points, and planned construction zones.
  • AI models forecast likely overload locations based on rainfall intensity, soil saturation proxies, and known bottlenecks.
  • The city runs scenarios:
    • "What if we delay this road project by 3 months?"
    • "What if we prioritize maintenance at these 12 inlets?"
    • "What if we add temporary storage at two hotspots?"
  • Outcomes are evaluated with clear metrics: reduced flood-prone intersections, fewer service disruptions, faster restoration time.
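A minimal sketch of how those three scenarios might be compared once the twin has simulated them; every number and weight below is an invented placeholder:

```python
# Compare "what if" scenarios on the metrics named above.
scenarios = {
    "delay road project 3 months":     {"flooded_intersections": 9, "disruptions": 14, "restore_hours": 30},
    "maintain 12 priority inlets":     {"flooded_intersections": 5, "disruptions": 9,  "restore_hours": 22},
    "temporary storage at 2 hotspots": {"flooded_intersections": 6, "disruptions": 11, "restore_hours": 18},
}

def score(m: dict) -> float:
    # Lower is better; weights stand in for council priorities.
    return 2.0 * m["flooded_intersections"] + 1.0 * m["disruptions"] + 0.5 * m["restore_hours"]

for name, m in scenarios.items():
    print(f"{name}: score={score(m):.1f}")
print("recommended:", min(scenarios, key=lambda n: score(scenarios[n])))
```

What matters for accountability is that the weights are written down and debated, not hidden inside a model.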

This is where AI in the public sector becomes tangible: it supports faster, documented choices aligned with public goals.

A practical metric set cities can adopt

Resilience projects often fail because success isn't measurable. A simple set of extractable metrics helps:

  • Time-to-detect incidents (minutes)
  • Time-to-respond (minutes/hours)
  • Service downtime (hours)
  • Asset failure recurrence (count per quarter)
  • Maintenance backlog (work orders / €)
  • Capital plan variance (planned vs actual timeline and cost)

Pick 3–5 per domain and stick to them for 12 months. Consistency beats complexity.
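Two of these, time-to-detect and time-to-respond, can be computed directly from incident timestamps. A minimal sketch, with invented records:

```python
from datetime import datetime

# Field names and timestamps are illustrative.
incidents = [
    {"occurred": "2025-12-01T03:05", "detected": "2025-12-01T03:22", "responded": "2025-12-01T04:10"},
    {"occurred": "2025-12-03T14:00", "detected": "2025-12-03T14:06", "responded": "2025-12-03T14:45"},
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

detect = [minutes_between(i["occurred"], i["detected"]) for i in incidents]
respond = [minutes_between(i["detected"], i["responded"]) for i in incidents]
print(f"avg time-to-detect: {sum(detect) / len(detect):.1f} min")
print(f"avg time-to-respond: {sum(respond) / len(respond):.1f} min")
```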

The hidden blocker: the skills gap (and how to design around it)

The fastest way to stall a smart city program is to assume every department will "learn AI" on top of their day jobs. YII's emphasis on talent pipelines (STEM programs, community innovation, and cross-sector participation) hits a real pain point.

But cities don't have to choose between "hire a data science team" and "do nothing." The workable approach is to design systems that match real staffing.

How to make AI and digital twins usable for non-specialists

Here's what works when teams are small and overloaded:

  • Role-based experiences: planners, operators, and finance teams shouldn't see the same interface.
  • Opinionated workflows: step-by-step processes (incident triage, maintenance prioritization, permit impact checks) beat generic dashboards.
  • Model cards and decision logs: every model needs a plain-language summary and a log of when outputs were used; a sketch follows this list.
  • Training tied to tasks: "How to validate an anomaly alert" is better than "Intro to machine learning."
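A model card does not need special tooling to be useful. A minimal sketch of one kept as plain data next to the model; the structure is illustrative, not a formal standard:

```python
# A plain-language model card stored alongside the model it describes.
model_card = {
    "name": "anomaly-alert-water-network",
    "purpose": "Flag unusual pressure patterns for operator review",
    "inputs": ["pressure sensors (5-minute readings)", "pump states"],
    "not_for": ["automatic valve control", "billing decisions"],
    "owner": "Water Operations",
    "review_cadence": "quarterly",
    "known_limits": "Unreliable during planned maintenance windows",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```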

Community-driven innovation isn't charity; it's capacity building

Events like urban tech challenges and digital awards matter because they create:

  • reusable patterns (what worked elsewhere),
  • partnerships that reduce procurement risk,
  • a talent magnet effect (students and professionals see public sector problems as high-impact).

For e-pārvalde leaders, this is strategic: capacity is a resilience asset.

A realistic roadmap for public sector adoption (next 90 days)

Cities make progress when they start with one operational problem, one dataset backbone, and one accountability loop. If you're leading digital transformation in municipal services, this is a grounded plan you can execute this quarter.

Step 1: Pick a "high pain, high data" use case

Good candidates typically have frequent incidents and existing data:

  • water leak detection and response,
  • winter road maintenance routing,
  • bridge inspection prioritization,
  • construction coordination (lane closures + transit impacts).

Step 2: Define governance before models

Write down (one page is enough):

  • data owners and update frequency,
  • who can approve AI-assisted actions,
  • retention and audit requirements,
  • what counts as "acceptable error."
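That one-pager is worth keeping machine-readable, so a pipeline can refuse to run a model when governance is incomplete. A minimal sketch under that assumption; all field names and values are illustrative:

```python
# The one-page governance spec as structured data.
governance = {
    "use_case": "water leak detection and response",
    "data_owners": {"sensor_feeds": "Water Operations", "work_orders": "Maintenance"},
    "update_frequency": {"sensor_feeds": "15 min", "work_orders": "daily"},
    "approvers": ["shift supervisor", "network engineer"],
    "retention_days": 3650,  # hypothetical audit requirement: keep 10 years
    "acceptable_error": {"false_alarms_per_week": 5, "missed_leaks_per_quarter": 1},
}

required = {"data_owners", "approvers", "retention_days", "acceptable_error"}
missing = required - governance.keys()
assert not missing, f"governance incomplete: {missing}"
print("governance spec complete; modeling work can start")
```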

Step 3: Build the twin around decisions, not around assets

Start with the decision you're trying to improve:

  • "Which 20 assets get maintenance next month?"
  • "Which detour plan minimizes emergency response delays?"

Then pull in only the data needed to make that decision better.
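For the first question, the decision can be framed as a function before any platform is chosen. A minimal sketch, assuming a hypothetical risk score and crew-hour budget:

```python
# "Which 20 assets get maintenance next month?" as a decision-first query:
# only risk and crew hours matter here, so only those fields are pulled.
def next_month_shortlist(assets: list[dict], budget_hours: int = 400) -> list[dict]:
    ranked = sorted(assets, key=lambda a: a["risk"], reverse=True)
    shortlist, hours = [], 0
    for a in ranked:
        if len(shortlist) == 20 or hours + a["crew_hours"] > budget_hours:
            break  # respect both the list size and the crew-hour budget
        shortlist.append(a)
        hours += a["crew_hours"]
    return shortlist

# Invented demo data: 30 assets with declining risk and rising crew hours.
demo = [{"id": f"asset-{i}", "risk": 1.0 - i * 0.03, "crew_hours": 15 + i} for i in range(30)]
print([a["id"] for a in next_month_shortlist(demo)])
```

Everything a real twin adds (condition history, telemetry, scenarios) should sharpen the risk input to this function, not complicate the question.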

Step 4: Prove trust with a pilot that can be audited

A good pilot produces artifacts you can defend:

  • before/after metrics,
  • decision logs,
  • model performance reports,
  • stakeholder sign-off.

That's what turns pilots into programs.

Where this is heading in 2026: smart cities that can explain themselves

Trustworthy AI and digital twins are converging into something cities have needed for decades: a shared, evidence-based operating picture that supports both engineering and governance. The YII 2025 conversation signaled that the industry is pushing past prototypes and toward systems designed for real constraints: climate stress, tight budgets, aging assets, and limited staff.

For our "Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās" series, the takeaway is straightforward: AI in e-pārvalde isn't only about chatbots and digital forms. The bigger public value often sits in infrastructure decisions, where digital twins make trade-offs visible and trustworthy AI makes prioritization faster and more defensible.

If you're planning your 2026 roadmap now, ask a sharper question than "Should we build a digital twin?" Ask: Which decision would we defend better, faster, and more transparently if we had a trustworthy, living model of the city?