Smart city AI depends on trusted climate research and data infrastructure. Here’s why weakening it slows forecasts, raises risk, and harms emergency response.

Why Smart Cities Can’t Lose Climate Data Science
A few days before Christmas 2025, the U.S. National Science Foundation announced it’s considering restructuring—and potentially dismantling—the National Center for Atmospheric Research (NCAR), the country’s premier weather and climate research hub. Emergency managers and meteorologists responded fast, and for good reason: when weather data infrastructure weakens, cities don’t just lose “science.” They lose lead time.
That matters directly to the “AI in the public sector and smart cities” conversation. AI in the public sector doesn’t succeed because of nicer dashboards. It succeeds because it’s fed consistent, trusted, shared data—and because the models behind that data keep improving. Break the research pipeline, and the “smart” part of smart cities becomes expensive theater.
What follows is the practical view: what NCAR-type institutions really provide to cities, why AI depends on them, what failure looks like on the ground, and how public sector leaders can protect climate-data capability even when national politics turns hostile.
NCAR isn’t “academic”—it’s operational infrastructure
NCAR’s value isn’t limited to publishing papers. It’s one of the places where forecasting gets better over time—through improved numerical models, better use of observations, and advances in how uncertainty is quantified.
Emergency managers quoted in the original reporting put it plainly: if NCAR disappeared, you’d still get a forecast tomorrow. The problem is that your forecasts stop improving. That’s not a theoretical loss. A “flatline” in forecasting skill means:
- More false alarms (people stop responding)
- Missed high-impact events (people don’t get time to act)
- Wider uncertainty windows (harder decisions for closures, evacuations, staging)
- Slower translation from new sensors to usable warnings
For city and regional government, weather and climate research functions like a national shared service. Most municipalities can’t fund, staff, and validate the kind of model development that underpins modern hazard forecasting. They consume it—through the National Weather Service, through regional forecasting offices, through the vendors and apps that build on those foundations.
Here’s the uncomfortable truth: AI doesn’t replace foundational science. It amplifies it. If the foundation weakens, AI scales the weakness.
Why AI in emergency management is only as good as the climate data pipeline
AI in public safety and resilience works best in three modes:
- Prediction (what’s likely, where, and when)
- Prioritization (what to do first with limited resources)
- Communication (how to get the right message to the right people)
All three depend on the same thing: stable, high-quality environmental data and models.
Prediction: AI needs calibrated forecasts, not vibes
Cities are increasingly using machine learning to improve local forecasts and impact estimates—flooded intersections, wind damage probabilities by neighborhood, heat-health risk indices by block, wildfire smoke exposure by hour.
But those tools typically start with meteorological and climate outputs: ensemble model runs, radar products, satellite retrievals, hydrologic inputs. If research slows, those upstream products stagnate.
A practical example: many modern decision systems rely on ensemble forecasting (multiple simulations to represent uncertainty). AI systems can learn from ensembles to estimate probabilities of impacts and to trigger actions at thresholds. If ensemble design and skill don’t improve, your AI-based trigger logic becomes either more conservative (more false positives) or more fragile (missed events).
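To make the trigger logic concrete, here is a minimal Python sketch of how a decision system might turn ensemble members into an exceedance probability and act on a threshold. The rainfall values, the 40 mm impact threshold, and the pump-staging rule are illustrative assumptions, not any agency’s actual criteria.

```python
def exceedance_probability(ensemble_members: list[float], impact_threshold: float) -> float:
    """Fraction of ensemble members at or above an impact-relevant threshold."""
    hits = sum(1 for m in ensemble_members if m >= impact_threshold)
    return hits / len(ensemble_members)


def should_trigger(ensemble_members: list[float],
                   impact_threshold: float,
                   probability_trigger: float) -> bool:
    """Trigger a protective action once the exceedance probability is high enough."""
    return exceedance_probability(ensemble_members, impact_threshold) >= probability_trigger


# Hypothetical 6-hour rainfall totals (mm) from a 10-member ensemble.
forecast_mm = [22.0, 35.5, 41.0, 18.2, 52.3, 47.8, 29.9, 38.4, 44.1, 31.0]

# Illustrative decision rule: pre-stage pumps if at least 40% of members reach 40 mm.
if should_trigger(forecast_mm, impact_threshold=40.0, probability_trigger=0.4):
    print("Pre-stage pumps: flash-flood exceedance probability is above the trigger")
```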
Prioritization: the “dispatch layer” depends on trustworthy hazard layers
Smart city emergency management is moving toward automated or semi-automated resource staging:
- Pre-positioning pumps before urban flash floods
- Staging utility crews for ice storms or high wind
- Re-timing traffic signals for evacuation routes
- Opening warming centers based on heat-risk projections
Those decisions only work if the hazard layers are reliable enough to justify cost. If the science pipeline weakens, the public sector’s rational response is to hesitate—and hesitation is a hidden cost that shows up in overtime, damage, and casualties.
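As a rough sketch of that dispatch layer, the example below ranks districts by expected impact (hazard probability times exposed assets) to decide where pumps go first. The district names and numbers are invented; the point is that a biased hazard layer silently reorders the whole plan.

```python
# District names, probabilities, and exposure scores below are invented for illustration.
districts = {
    "riverside":  {"flood_probability": 0.55, "exposed_assets": 120},
    "old_town":   {"flood_probability": 0.20, "exposed_assets": 300},
    "industrial": {"flood_probability": 0.35, "exposed_assets": 80},
}


def expected_impact(info: dict) -> float:
    """Expected impact score: probability of flooding times exposed assets."""
    return info["flood_probability"] * info["exposed_assets"]


# Highest expected impact first; if the hazard probabilities are biased,
# the entire staging order changes without anyone noticing.
priority = sorted(districts, key=lambda name: expected_impact(districts[name]), reverse=True)
print(priority)  # e.g. ['riverside', 'old_town', 'industrial']
```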
Communication: NCAR-style social science work is part of the stack
One detail in the original reporting is easy to miss but crucial: NCAR’s work isn’t only physics and models. It also includes social science research on warning communication—how to phrase alerts, how people interpret risk, what prompts protective action.
Cities deploying AI-driven alerting (multilingual messaging, channel optimization, personalized alerts, rumor detection) need that research base. Otherwise, AI “optimizes” for clicks and deliveries, not for the outcome you actually want: people taking the right action.
The smart city myth: “We’ll just build our own models”
Most cities can’t.
It’s tempting to think a well-funded city or national digital agency can “stand up an AI model” to substitute for national climate research. I’ve found this is one of the most persistent myths in public sector innovation, not because the teams aren’t capable, but because the inputs are bigger than any single organization.
To replicate what a major climate and weather research center enables, you need:
- Long-term observational archives and quality control
- Continuous model development and verification
- High-performance computing capacity and specialist staff
- Governance that supports open scientific collaboration
- Stable funding cycles that survive elections
A city can build excellent applications—risk dashboards, routing systems, automated triggers, public communications. But those applications still rest on a national and global scientific supply chain.
When that supply chain is disrupted, vendors also suffer. Many private forecasting and risk analytics companies depend on the same public research ecosystem for baseline models, methods, and trained talent. So the impact doesn’t stay inside government agencies; it spreads to the entire resilience technology market.
What actually breaks when research is “rescheduled” or scattered
The current debate isn’t only “close vs. keep.” It’s also about fragmentation: breaking an institution into pieces, moving teams, shifting responsibilities to other entities.
Fragmentation creates three predictable failures.
1) Slower iteration cycles
Forecast improvements come from iteration: hypothesis → code → test → compare → deploy. When teams are split across agencies or locations with different incentives, iteration slows.
Emergency management feels that as a lag in:
- Improved lead time for flash flooding
- Better tornado and severe storm warning performance
- Better smoke and air-quality forecasting during wildfire season
2) Loss of institutional memory
Weather and climate modeling has deep “tribal knowledge”—why a parameterization was chosen, what broke last time, how biases were corrected. When people leave or teams are reorganized, that memory leaks.
AI teams in government rarely plan for this, but they should: model governance depends on human continuity.
3) Reduced trust and adoption
Cities adopt data products when they trust them. Trust comes from transparency, stability, and a track record. If the ecosystem looks politically volatile, public sector leaders become cautious about integrating those products into critical workflows.
And once critical workflows stop absorbing innovation, momentum is hard to restart.
If your city uses AI for resilience, here’s what to do now
This is the part that’s within local and national digital leaders’ control. You can’t single-handedly fix federal science politics. You can reduce your city’s exposure by treating climate and weather inputs like critical infrastructure dependencies.
Build a “data dependency register” for climate-risk AI
Answer this question clearly: What systems break if forecast skill stagnates?
Create a short dependency register for:
- Flood forecasting and pump dispatch tools
- Heat-health alerting and social services planning
- Winter storm response routing
- Wildfire smoke advisories and school closure triggers
- Infrastructure condition monitoring (roads, bridges, drainage)
For each, document:
- Primary data sources (radar, models, hydrology, satellites)
- Update frequency and latency requirements
- Minimum viable accuracy/skill assumptions
- Backup sources and manual fallback procedures
This is boring work. It also prevents operational panic later.
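One way to keep the register lightweight is to store each entry as structured data that your team can version and review alongside the systems it describes. The sketch below uses an assumed schema; the field names, sources, and thresholds are illustrative and should be adapted to your own tooling.

```python
# A minimal dependency-register entry, sketched as a plain dictionary.
# All values are illustrative; replace them with your own systems and thresholds.

flood_dispatch_register_entry = {
    "system": "flood forecasting and pump dispatch",
    "primary_sources": ["weather radar", "hydrologic model output", "city rain gauges"],
    "update_frequency_minutes": 15,
    "max_acceptable_latency_minutes": 10,
    "minimum_skill_assumption": "probability of detection >= 0.8 for 6-hour flash-flood guidance",
    "backup_sources": ["neighboring region radar mosaic", "manual gauge readings"],
    "manual_fallback": "duty officer dispatches pumps from a static high-risk intersection list",
}
```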
Demand uncertainty, not just point predictions
Procurements often ask vendors for “a forecast.” Better contracts ask for:
- Probabilistic outputs
- Confidence intervals
- Calibration reporting (do predicted probabilities match observed frequencies?)
- Performance breakdown by season and event type
That shifts AI from “pretty prediction” to decision support you can defend after an incident.
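The calibration checks are small enough to script yourself during acceptance testing. Here is a minimal sketch, assuming you hold paired records of forecast probabilities and observed outcomes; the sample data is illustrative.

```python
def brier_score(forecast_probs: list[float], outcomes: list[int]) -> float:
    """Mean squared difference between forecast probabilities and outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)


def reliability_table(forecast_probs: list[float], outcomes: list[int],
                      bins=(0.0, 0.25, 0.5, 0.75, 1.01)):
    """Per probability bin: sample count, mean forecast probability, observed frequency."""
    rows = []
    for lo, hi in zip(bins, bins[1:]):
        pairs = [(p, o) for p, o in zip(forecast_probs, outcomes) if lo <= p < hi]
        if pairs:
            mean_p = sum(p for p, _ in pairs) / len(pairs)
            obs_freq = sum(o for _, o in pairs) / len(pairs)
            rows.append((f"{lo:.2f}-{hi:.2f}", len(pairs), round(mean_p, 2), round(obs_freq, 2)))
    return rows


# Illustrative verification data: forecast probability of an impact and whether it occurred.
probs = [0.1, 0.2, 0.7, 0.8, 0.4, 0.9, 0.3, 0.6]
obs = [0, 0, 1, 1, 0, 1, 1, 0]

print("Brier score:", round(brier_score(probs, obs), 3))
for row in reliability_table(probs, obs):
    print(row)
```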
Keep your local observations open and high quality
Cities control a growing sensor footprint: rain gauges, stream gauges, traffic cameras (for flood detection), road weather sensors, air-quality monitors.
Two strong moves:
- Improve quality control and metadata (location, maintenance logs, calibration)
- Share data via standardized interfaces with regional partners
When national research ecosystems wobble, local observations become even more valuable—not as a replacement, but as a stabilizer for local impact modeling.
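Basic quality control does not require heavy tooling. The sketch below applies two illustrative checks, a plausibility range and a staleness limit, and keeps station metadata attached to the reading; the thresholds and field names are assumptions rather than a standard.

```python
from datetime import datetime, timedelta, timezone


def qc_rain_gauge(reading_mm: float, observed_at: datetime, now: datetime) -> list[str]:
    """Return QC flags for one reading; an empty list means it passed the basic checks."""
    flags = []
    if reading_mm < 0 or reading_mm > 120:           # implausible short-interval total
        flags.append("out_of_range")
    if now - observed_at > timedelta(minutes=30):    # sensor has gone stale
        flags.append("stale")
    return flags


# Metadata travels with the reading: station identity, location, calibration history.
observation = {
    "station_id": "rg-014",
    "latitude": 56.95,
    "longitude": 24.11,
    "calibrated_on": "2025-09-01",
    "reading_mm": 8.4,
    "observed_at": datetime(2025, 12, 20, 6, 50, tzinfo=timezone.utc),
}

flags = qc_rain_gauge(
    observation["reading_mm"],
    observation["observed_at"],
    now=datetime(2025, 12, 20, 7, 0, tzinfo=timezone.utc),
)
print(flags or "passed basic QC")
```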
Treat research partnerships as part of resilience operations
If you’re running an AI program in a city, you should have standing relationships with:
- A university or research institute
- Your national meteorological/hydrological services
- Neighboring municipalities (shared basins, shared coastlines)
This isn’t “innovation theater.” It’s continuity planning for the intelligence layer of your city.
What policymakers should hear: climate research saves money in city budgets
One reason climate research becomes politically vulnerable is that its benefits look indirect. For cities, they’re not indirect at all.
Better forecasting skill reduces:
- Overtime hours (more targeted staffing)
- Damage to public assets (earlier protective actions)
- Disruption costs (better timing of closures and reroutes)
- Health impacts (earlier heat and smoke interventions)
Even small improvements compound. A few extra hours of lead time for an extreme rainfall event can change whether you’re rescuing people from cars or keeping them off the road in the first place.
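A back-of-the-envelope calculation shows how that compounding works; every figure below is an illustrative assumption, not a measured value.

```python
def expected_event_cost(damage: float, fraction_avoidable_with_action: float,
                        probability_action_taken_in_time: float) -> float:
    """Expected cost after protective action that may or may not start in time."""
    avoided = damage * fraction_avoidable_with_action * probability_action_taken_in_time
    return damage - avoided


damage = 2_000_000   # assumed cost of an extreme-rainfall event if nothing is done
avoidable = 0.25     # assumed share of that cost protective action can prevent
short_lead = 0.3     # assumed chance the action starts in time with today's lead time
longer_lead = 0.7    # assumed chance it starts in time with a few more hours of warning

saving = (expected_event_cost(damage, avoidable, short_lead)
          - expected_event_cost(damage, avoidable, longer_lead))
print(f"Expected saving from extra lead time, per event: {saving:,.0f}")
```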
If you want a single sentence to use in a briefing:
Climate research is the upstream investment that makes AI-driven emergency management worth deploying.
Where this fits in the “AI in public sector” story
The AI conversation in government often over-focuses on apps: chatbots, automation, document analysis. Those matter. But smart cities live or die on situational awareness—what’s happening now, what’s likely next, and how confident we are.
That situational awareness depends on a chain: observations → models → interpretation → decisions → communications. NCAR sits near the top of that chain, improving the models and methods that everyone else inherits.
If we’re serious about AI in public administration and smart cities, we should be equally serious about protecting the environmental data infrastructure that feeds it.
The next 5–10 years of urban resilience will be shaped less by who has the flashiest AI pilot and more by who keeps their data supply chain intact. When the next “once in a century” flood hits a city for the second time in a decade, leaders will wish they treated climate research capacity as core infrastructure—because that’s what it is.