Smart City Strategy in 2025: Joined-Up, AI-Ready Cities

Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās (Artificial Intelligence in the Public Sector and Smart Cities) · By 3L3C

Smart city strategy in 2025 is about joined-up governance. See how AI and shared data can break silos, improve services, and build trust.

Smart Cities · Public Sector AI · Data Governance · Digital Transformation · Digital Inclusion · Urban Analytics



A decade ago, “smart city” often meant devices everywhere: sensors on lampposts, dashboards in control rooms, pilots that looked great in a slide deck and then quietly expired. In late 2025, most public sector leaders I speak with are less impressed by gadgets and far more interested in outcomes: faster services, safer streets, lower emissions, and a city workforce that can actually operate and govern digital systems.

That shift is exactly why the question “What is a smart city?” still matters—and why it’s hard to answer cleanly. The SmartCitiesWorld podcast episode with Chris Dymond (Unfolding; Zigurat) lands on a point that’s become unavoidable: a modern smart city strategy has to treat technology, environment, and people as one system. If you optimise one piece in isolation, you usually make another worse.

This post is part of the “Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās” series, so I’ll take that joined-up framing and make it practical: how AI in the public sector can reduce silos, improve decision-making, and support city operations—without creating a new class of “digitally excluded” residents or employees.

The 2025 reality: “smart” isn’t a tech program anymore

A smart city in 2025 is a governance capability, not a procurement category. The smartest cities aren’t the ones buying the newest kit; they’re the ones that can consistently turn data into better services, measure the impact, and adapt.

Here’s the uncomfortable truth: many “smart city” efforts failed because they were framed as IT projects instead of service redesign. If your waste collection team, transport planners, housing department, and emergency management unit all operate with different data definitions, different vendors, and different incentives, then adding AI on top won’t fix it. It will just make the fragmentation faster.

What Chris Dymond argues—implicitly and explicitly—is that the taxonomy needs to expand: you can’t talk about smart cities without talking about climate resilience, workforce capability, and digital inclusion in the same breath.

A useful working definition (that doesn’t collapse into buzzwords)

A practical definition I’ve found helpful is this:

A smart city is a city that can coordinate decisions across departments using shared data, clear accountability, and measurable outcomes—while protecting residents’ rights.

AI can strengthen that capability, but only if the basics are in place: shared data standards, operational ownership, and an agreed view of “success.”

Joined-up strategy means one operating model for people, tech, and environment

A joined-up smart city strategy starts with recognising a pattern: most city outcomes are cross-departmental by nature.

  • Reducing congestion affects air quality, public health, business productivity, and road safety.
  • Improving building efficiency affects energy demand, social equity, and climate targets.
  • Managing flood risk ties together planning, asset management, emergency response, and community communications.

If each department optimises locally, you get a global mess.

What “joined-up” looks like in practice

A joined-up model usually includes three concrete moves:

  1. One outcomes framework: a short list of city outcomes with targets and owners (not 60 KPIs nobody reads).
  2. Shared data products: reusable datasets (and APIs) that multiple teams rely on—address data, asset registers, mobility data, permits, incident logs.
  3. Cross-functional governance: a mechanism to resolve conflicts (privacy vs. service quality, speed vs. procurement, innovation vs. reliability).
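To make the second move concrete, here is a minimal sketch of what a “shared data product” contract could look like, with an explicit owner, schema, freshness guarantee, and a registry of consuming departments. All names and fields are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A reusable dataset that multiple departments depend on."""
    name: str
    owner: str               # accountable team, not an individual's inbox
    schema: dict             # field name -> type, agreed across departments
    freshness_hours: int     # maximum acceptable staleness
    consumers: list = field(default_factory=list)

    def register_consumer(self, department: str) -> None:
        # Knowing who consumes a dataset makes breaking changes visible.
        if department not in self.consumers:
            self.consumers.append(department)

# Hypothetical example: a shared address register used by two departments.
addresses = DataProduct(
    name="address_register",
    owner="data-platform-team",
    schema={"uprn": "str", "street": "str", "postcode": "str"},
    freshness_hours=24,
)
addresses.register_consumer("waste")
addresses.register_consumer("housing")
print(addresses.consumers)  # ['waste', 'housing']
```

The point of the consumer registry is governance, not engineering: once a dataset knows who depends on it, a schema change becomes a coordination decision rather than a silent breakage.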

AI helps most when it’s aimed at these shared outcomes. It’s not “AI for transport” or “AI for housing” as separate initiatives. It’s AI for congestion reduction, AI for energy poverty, AI for safer intersections—and then departments coordinate around the outcome.

Where environment fits (and why it can’t be bolted on)

In December 2025, climate adaptation isn’t a niche agenda. Cities are dealing with heatwaves, flooding, and infrastructure stress. A smart city strategy that doesn’t treat environmental data as core operational data is outdated.

AI can support environmental goals by:

  • Forecasting: predicting flood risk hotspots or heat vulnerability using historical patterns and real-time conditions.
  • Optimising: adjusting street lighting, building controls, or traffic signals to reduce energy use and emissions.
  • Targeting: identifying which buildings or neighbourhoods deliver the highest impact from retrofits.
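The “targeting” idea in particular does not need machine learning to start. A transparent, rule-based score is often enough to rank retrofit candidates and is easy to explain to residents. The data, fields, and weights below are invented for illustration:

```python
# Rank candidate buildings for retrofit using a transparent rule-based score.
# Fields and weights are illustrative assumptions, not a validated model.
buildings = [
    {"id": "B1", "annual_kwh": 120_000, "epc_band": "F", "fuel_poverty_rate": 0.30},
    {"id": "B2", "annual_kwh": 80_000,  "epc_band": "C", "fuel_poverty_rate": 0.05},
    {"id": "B3", "annual_kwh": 150_000, "epc_band": "E", "fuel_poverty_rate": 0.22},
]

EPC_PENALTY = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5, "G": 6}

def retrofit_score(b: dict) -> float:
    # Higher energy use, a worse EPC band, and higher fuel poverty
    # all raise retrofit priority.
    return (b["annual_kwh"] / 10_000) + 2 * EPC_PENALTY[b["epc_band"]] \
        + 20 * b["fuel_poverty_rate"]

ranked = sorted(buildings, key=retrofit_score, reverse=True)
print([b["id"] for b in ranked])  # ['B1', 'B3', 'B2']
```

Because every term is visible, a planning officer can challenge the weights in a meeting; that auditability is exactly what a black-box model would cost you at this stage.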

But those benefits require data that’s consistent, timely, and governed. Which brings us to the hardest part.

Breaking down silos: data integration is the real “smart city” work

Siloed governance is usually blamed on culture. Culture matters, but I’m convinced the bigger issue is structure: budgets, procurement rules, incompatible systems, and unclear data ownership.

AI exposes this brutally. If your address database doesn’t match your asset register, your AI model will happily predict nonsense with high confidence.
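A minimal sketch of the kind of cross-reference check that exposes this problem before a model does. The table contents are invented; the pattern is simply “every asset must point at a known address”:

```python
# Cross-check asset records against the address register before any modelling.
# Identifiers and records below are invented for illustration.
address_register = {"UPRN-001", "UPRN-002", "UPRN-003"}
asset_register = [
    {"asset_id": "LAMP-17", "uprn": "UPRN-001"},
    {"asset_id": "LAMP-18", "uprn": "UPRN-999"},  # orphaned reference
    {"asset_id": "BIN-04",  "uprn": "UPRN-002"},
]

orphans = [a["asset_id"] for a in asset_register
           if a["uprn"] not in address_register]
match_rate = 1 - len(orphans) / len(asset_register)
print(f"orphaned assets: {orphans}, match rate: {match_rate:.0%}")
```

Running a check like this weekly, and publishing the match rate internally, turns an invisible data-quality problem into a number someone owns.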

The anti-pattern to avoid: the “dashboard city”

Many cities built beautiful dashboards that pull data from multiple systems, but they didn’t change the operating model. The result:

  • Decision-makers see the numbers, but nobody owns the action.
  • Frontline teams don’t trust the data.
  • Data teams become report factories.

A joined-up, AI-ready city flips that. It builds a few high-value data products and connects them directly to operational processes.

A practical sequence that works (even with limited capacity)

If you’re trying to make progress without boiling the ocean, I’d use this order:

  1. Pick one cross-cutting use case (e.g., “reduce missed waste pickups by 30%” or “cut permit processing time from 30 days to 10”).
  2. Map the decision chain: what decision is made, by whom, with what data, and what happens next.
  3. Fix the data at the source: standardise fields, validate inputs, remove duplicate identifiers.
  4. Add AI last: start with analytics and rules, then introduce machine learning where it clearly beats simpler methods.

That “AI last” point is not anti-innovation. It’s how you avoid fragile systems and public backlash.
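Step 3, fixing the data at the source, is usually unglamorous standardisation and deduplication. A hedged sketch with invented field names, showing all three fixes (standardise, validate, dedupe) in one pass:

```python
def clean_records(records: list[dict]) -> list[dict]:
    """Standardise fields, drop invalid rows, and remove duplicate identifiers."""
    seen = set()
    cleaned = []
    for r in records:
        rid = str(r.get("id", "")).strip().upper()            # standardise the identifier
        postcode = str(r.get("postcode", "")).replace(" ", "").upper()
        if not rid or not postcode:                           # validate required inputs
            continue
        if rid in seen:                                       # remove duplicates
            continue
        seen.add(rid)
        cleaned.append({"id": rid, "postcode": postcode})
    return cleaned

# Invented example: two records that are duplicates once standardised,
# plus one that fails validation.
raw = [
    {"id": " p-1 ", "postcode": "ab1 2cd"},
    {"id": "P-1", "postcode": "AB12CD"},   # duplicate after standardisation
    {"id": "P-2", "postcode": ""},         # fails validation
]
print(clean_records(raw))  # [{'id': 'P-1', 'postcode': 'AB12CD'}]
```

Note that the duplicate only becomes visible after standardisation, which is why the order of operations in this function matters.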

Examples of AI use cases that actually reduce silos

These are “joined-up” by design:

  • Urban mobility operations: AI-assisted traffic management that integrates roadworks schedules, public transport disruptions, and incident response.
  • Predictive maintenance for city assets: combining complaint data, inspection history, and sensor signals to prioritise repairs (roads, lighting, water infrastructure).
  • Social services triage: flagging residents at risk (with strict governance) by combining housing, health referrals, and benefits interactions—focused on earlier support, not surveillance.
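The predictive-maintenance case can be sketched as a priority score that blends signals from three separate systems, which is precisely what makes it “joined-up.” Fields, weights, and records are assumptions for illustration:

```python
def repair_priority(asset: dict) -> float:
    # Blend three signals that usually live in three different systems.
    complaints = asset["complaints_90d"]              # from the CRM
    months_since = asset["months_since_inspection"]   # from inspection records
    sensor_alert = 1 if asset["sensor_alert"] else 0  # from telemetry
    return 2.0 * complaints + 0.5 * months_since + 5.0 * sensor_alert

# Invented assets: one with a complaint backlog, one with a live sensor alert.
assets = [
    {"id": "ROAD-A", "complaints_90d": 4, "months_since_inspection": 18,
     "sensor_alert": False},
    {"id": "ROAD-B", "complaints_90d": 1, "months_since_inspection": 6,
     "sensor_alert": True},
]
queue = sorted(assets, key=repair_priority, reverse=True)
print([a["id"] for a in queue])  # ['ROAD-A', 'ROAD-B']
```

The score itself is trivial; the hard organisational work is getting the CRM, inspection, and telemetry teams to agree on one asset identifier so the three fields can be joined at all.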

The common thread: AI is valuable only when departments agree on shared definitions and shared accountability.

Upskilling without exclusion: the public sector workforce is the constraint

Chris Dymond highlights a balancing act that deserves more attention: cities must upskill staff for digital work without widening digital exclusion, both for employees and residents.

If AI becomes a layer only specialists can understand, you’ll get two outcomes:

  • Frontline workers disengage, and the “AI team” becomes a bottleneck.
  • Residents experience inconsistent services because staff can’t explain or override automated decisions.

What to train (hint: not everyone needs to code)

A sensible workforce plan separates roles from skills:

  • All staff (baseline): data literacy, privacy basics, AI limitations, how to challenge outputs.
  • Operational leads: translating service goals into measurable metrics, process mapping, vendor management.
  • Specialists: model evaluation, data engineering, security, and responsible AI assurance.

One stance I’ll take: prompting isn’t a strategy. Cities need durable capability in service design, data governance, and procurement—not just “how to use a chatbot.”

Digital inclusion: residents judge outcomes, not architecture

Digital inclusion isn’t solved by launching an app. It’s solved when residents can reliably access services through multiple channels and get help when they’re stuck.

For AI-enabled public services, that means:

  • Choice of channel: digital, phone, in-person—without penalty.
  • Plain-language explanations: what data is used, what the decision means, how to appeal.
  • Human override: staff can correct errors quickly.

If your AI improves efficiency but increases the number of residents who can’t complete a process, you’ve failed the “smart city” test.

AI governance for cities: trust is an operating requirement

Smart city discussions often treat ethics as a slide at the end. In practice, trust determines whether your system survives contact with reality.

A credible AI governance approach for local authorities includes:

1) Clear accountability for automated decisions

Someone must be responsible for outcomes—not “the model.” Put names on roles:

  • Service owner
  • Data owner
  • AI system owner (operations)
  • Risk/assurance lead

2) Minimum assurance for every AI use case

Cities don’t need perfection, but they do need consistency. A lightweight assurance checklist can include:

  • Documented purpose and expected benefit
  • Data quality checks and bias risks
  • Performance metrics (including false positives/negatives)
  • Security and access controls
  • Complaint and appeal pathway
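A checklist like this only bites if it is enforced, for example by refusing to register a use case until every item is documented. A minimal sketch, with illustrative item names:

```python
# Illustrative assurance items; a real authority would define its own list.
REQUIRED_ASSURANCE = [
    "purpose_and_benefit",
    "data_quality_and_bias",
    "performance_metrics",
    "security_controls",
    "appeal_pathway",
]

def missing_assurance(use_case: dict) -> list[str]:
    """Return the checklist items a use case has not yet documented."""
    docs = use_case.get("assurance", {})
    return [item for item in REQUIRED_ASSURANCE if not docs.get(item)]

# Hypothetical pilot that has documented two of the five items.
pilot = {
    "name": "waste-route-optimiser",
    "assurance": {
        "purpose_and_benefit": "Cut missed pickups by 30%",
        "performance_metrics": "False-negative rate per route",
    },
}
print(missing_assurance(pilot))
# ['data_quality_and_bias', 'security_controls', 'appeal_pathway']
```

The output doubles as the agenda for the next assurance review, which keeps the checklist lightweight rather than ceremonial.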

3) Procurement that prevents lock-in and chaos

If every department buys separate AI tools, you’ll recreate silos in a new form. Better procurement signals include:

  • Interoperability requirements (APIs, data export)
  • Shared identity and access management
  • Vendor transparency on training data and limitations
  • Support for monitoring and retraining

This is where “AI in smart cities” becomes real: not as a demo, but as a governed service capability.

A 90-day plan: one joined-up AI pilot that proves the model

If you’re a city leader trying to show progress quickly, a 90-day plan can work—if it’s scoped tightly.

Here’s a template I’d use:

  1. Select one problem with clear pain: high complaint volume, high cost, or political urgency.
  2. Define a single outcome metric: e.g., “reduce response time from 10 days to 5.”
  3. Create a shared dataset: one “source of truth” for the pilot (even if it’s not perfect).
  4. Deploy a human-in-the-loop model: AI suggests, humans decide; measure both speed and quality.
  5. Publish an internal transparency note: what data is used, how accuracy is tracked, how residents can appeal.
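Step 4, the human-in-the-loop pattern, is easy to state and easy to skip. One concrete way to measure “both speed and quality” is to log every suggestion alongside the human decision; the record structure below is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class TriageDecision:
    case_id: str
    ai_suggestion: str
    human_decision: str
    minutes_to_decide: float

    @property
    def overridden(self) -> bool:
        # The override rate is a core pilot metric: it tells you whether
        # staff trust the model, and whether the model deserves it.
        return self.ai_suggestion != self.human_decision

# Invented log entries from a hypothetical pilot.
log = [
    TriageDecision("C-101", "expedite", "expedite", 3.5),
    TriageDecision("C-102", "reject", "expedite", 9.0),
]
override_rate = sum(d.overridden for d in log) / len(log)
print(f"override rate: {override_rate:.0%}")  # 50%
```

A very low override rate can mean the model is good, or that staff have stopped checking it; a very high one means the model is not yet earning its place. Either way, you only learn this if the log exists.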

The goal isn’t to “roll out AI.” The goal is to prove that joined-up governance works—and can be repeated.

Where this series is heading next

The broader theme of Mākslīgais intelekts publiskajā sektorā un viedajās pilsētās is simple: AI improves e-governance services, infrastructure management, traffic flow analysis, and data-driven decision-making—but only when cities treat AI as part of service delivery, not an add-on.

A joined-up smart city strategy is the foundation that makes AI useful instead of risky. If you’re redefining “smart” for 2026 budget cycles, my advice is blunt: invest in shared data products and workforce capability first, and you’ll get better AI outcomes faster.

What would change in your city if every department agreed on one shared outcome—then built the data and AI around it?