AI Data Center Deals: What Telcos Should Do Next

AI in Cloud Computing & Data Centers · By 3L3C

Anthropic’s new AI data center deal shows why power-first planning matters. Here’s how telcos can scale AI for 5G and network ops without overruns.

Tags: AI infrastructure, Data centers, Telecommunications, 5G operations, Cloud strategy, Network automation

245 megawatts of IT load isn’t a “data center project.” It’s an industrial-scale power decision—closer to a small city’s consumption than a typical enterprise facility. That’s the headline behind Anthropic’s newest US data center partnership: Hut 8 (energy infrastructure), Fluidstack (cluster management), and Anthropic (AI workloads), with Google backing the financing for an initial 15-year, $7 billion lease tied to a Louisiana build.

If you work in telecommunications, this isn’t just Big Tech news you scroll past. It’s a bright signal about where AI capacity is going, how it’s being financed, and what “ready for AI” will mean operationally in 2026. Telcos are building AI into the network (RAN optimization, predictive maintenance, energy management) and into the business (care, sales, fraud, field ops). All of that only works when compute, power, and connectivity planning stop living in separate org charts.

This post is part of our AI in Cloud Computing & Data Centers series, where we focus on the unglamorous but decisive layer: infrastructure optimization, workload management, energy efficiency, and resource allocation. The reality? Most AI strategies fail here, not in the model lab.

Why this data center agreement matters to telecom

This deal matters because it ties compute growth directly to power and site development, then commits to a phased scale-up. For telcos, that’s the blueprint—whether you’re building private AI factories, partnering with hyperscalers, or pushing inference to edge data centers.

The announced structure targets at least 245MW and potentially up to 2,295MW of AI data center capacity for Anthropic, delivered via high-performance clusters managed by Fluidstack. The first phase includes 245MW of IT capacity supported by 330MW of utility capacity, with later tranches adding ~1,000MW and potentially ~1,050MW more across other US sites.

Here’s what telecom leaders should take from that:

  • AI scale is being planned in megawatts, not racks. If you’re only forecasting GPUs, you’re late.
  • Phasing isn’t a “nice-to-have.” It’s how you align permits, interconnects, substations, cooling, and supply chains without betting the company.
  • Financing is part of architecture. When Google backs lease payments and pass-throughs, that’s not just money—it’s a risk-sharing model and a capacity reservation strategy.

For telecom, the analog is simple: network AI programs need guaranteed capacity and predictable unit economics, or they stall after pilots.

The telecom-specific impact: AI workloads are becoming network-critical

Telcos are moving from “AI in the office” to AI in the control plane:

  • RAN parameter tuning and optimization
  • congestion prediction and traffic steering
  • anomaly detection across transport and core
  • preventive truck-roll reduction using predictive maintenance
  • energy optimization (sleep modes, cooling, site power policies)

When those workloads become operationally central, latency, availability, and cost stop being IT concerns and become network KPIs.

The real constraint isn’t GPUs—it’s power, cooling, and delivery cadence

The deal’s most important detail isn’t the brand names. It’s the alignment of power, digital infrastructure, and compute resources for energy-intensive models.

If you’ve ever tried to scale an AI platform inside a telecom operator, you’ve seen the same constraints show up fast:

  • Power availability and interconnection timelines (often 18–36 months)
  • Cooling design complexity (air vs liquid; retrofits vs greenfield)
  • Supply chain uncertainty (transformers, switchgear, networking optics)
  • Data gravity (where subscriber and network telemetry lives)

My stance: telcos should stop treating AI infrastructure as a capacity add-on to existing cloud strategy. AI changes the shape of demand. It’s bursty for training, steady for inference, and unforgiving about latency for some network control use cases.

What “245MW” means in practice for AI operations

IT megawatts translate into operational decisions telcos will recognize:

  • Utilization is everything. A 10–15% utilization gap at this scale is a budget bonfire.
  • Network fabric design becomes a cost driver. East-west traffic inside clusters can rival north-south egress.
  • Redundancy strategy affects model performance. High availability patterns (active-active, checkpointing) change how training and inference are scheduled.

For telcos running AI for network optimization, the same dynamic appears on a smaller scale: if your inference clusters sit idle between peaks, your “AI ROI” spreadsheet collapses.
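
To put rough numbers on the utilization point, here is a back-of-envelope sketch. Every figure in it (fixed cost, energy price, PUE, per-accelerator power) is an illustrative assumption, not a number from the deal:

```python
# Back-of-envelope: how utilization drives effective cost per useful GPU-hour.
# Every number below is an illustrative assumption, not a figure from the deal.

ANNUAL_FIXED_COST = 500e6    # assumed lease + hardware amortization + staff, $/year
IT_LOAD_MW = 245             # contracted IT load
PUE = 1.3                    # assumed power usage effectiveness
ENERGY_PRICE_PER_MWH = 60.0  # assumed blended energy price, $/MWh
GPU_POWER_KW = 1.0           # assumed per-accelerator draw incl. host share
HOURS_PER_YEAR = 8760

gpus = int(IT_LOAD_MW * 1000 / GPU_POWER_KW)  # ~245,000 accelerators at 1 kW each

def cost_per_useful_gpu_hour(utilization: float) -> float:
    """Blend fixed and energy cost over the GPU-hours that actually did work."""
    energy_cost = IT_LOAD_MW * utilization * PUE * ENERGY_PRICE_PER_MWH * HOURS_PER_YEAR
    useful_gpu_hours = gpus * HOURS_PER_YEAR * utilization
    return (ANNUAL_FIXED_COST + energy_cost) / useful_gpu_hours

for util in (0.85, 0.70):
    print(f"{util:.0%} utilization -> ~${cost_per_useful_gpu_hour(util):.2f} per useful GPU-hour")
```

The absolute dollars matter less than the shape: the lower the average utilization, the more every useful GPU-hour costs, and the harder the ROI spreadsheet is to defend.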

What telcos should learn from the phased approach (and copy)

A phased plan is the most pragmatic way to scale AI infrastructure while avoiding two classic failures: overbuilding early or being capacity-starved later.

The partnership is described as three tranches:

  1. Initial build: 245MW IT capacity (330MW utility)
  2. Expansion option: +1,000MW
  3. Additional sites: up to +1,050MW across other locations

Telcos can mirror this with a staged AI infrastructure roadmap that matches how real deployments evolve—from narrow use cases to broad platform adoption.

Phase 1: Start with “production inference for one domain”

Pick a domain where inference can be tied to measurable KPIs:

  • call center deflection and handle time reduction
  • network fault classification and alarm suppression
  • proactive churn interventions based on behavior signals

Infrastructure pattern:

  • centralized inference with strict observability
  • clear SLOs for latency, uptime, and drift detection (see the sketch after this list)
  • data pipelines hardened for privacy and auditability
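
The SLO item above is easiest to enforce when the targets live in code rather than on a slide. A minimal sketch, with illustrative thresholds and made-up workload names:

```python
# Minimal sketch: encode inference SLOs as data so dashboards, alerts, and
# rollback automation all read the same thresholds. Values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceSLO:
    name: str
    p95_latency_ms: float    # served latency budget
    availability_pct: float  # monthly availability target
    max_drift_score: float   # drift threshold (e.g. PSI on key features)

SLOS = {
    "alarm_classification": InferenceSLO("alarm_classification", 150, 99.9, 0.2),
    "care_copilot":         InferenceSLO("care_copilot",         800, 99.5, 0.3),
}

def breaches(slo: InferenceSLO, p95_ms: float, avail: float, drift: float) -> list[str]:
    """Return which SLO dimensions are currently violated."""
    out = []
    if p95_ms > slo.p95_latency_ms:
        out.append("latency")
    if avail < slo.availability_pct:
        out.append("availability")
    if drift > slo.max_drift_score:
        out.append("drift")
    return out

# Example: feed in current measurements from your observability stack.
print(breaches(SLOS["alarm_classification"], p95_ms=180, avail=99.95, drift=0.05))  # ['latency']
```

The point is that alerting, dashboards, and rollback automation all read one set of thresholds instead of each team keeping its own copy.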

Phase 2: Add “network-grade” reliability and locality

Once AI touches network operations, you’ll need:

  • multi-region failover planning
  • model rollout controls such as canary and shadow mode (sketched below)
  • edge inference where latency or data locality demands it

Infrastructure pattern:

  • hybrid cloud plus regional data centers
  • dedicated interconnect capacity and QoS
  • standardized GPU cluster images and scheduling policies
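
Rollout controls like canary and shadow mode are mostly routing logic. A minimal sketch, where the two "models" are stand-in callables rather than any specific serving framework's API:

```python
# Minimal sketch of canary + shadow rollout for an inference endpoint.
# The models are stand-in callables; fractions and names are illustrative.
import random

def current_model(features: dict) -> str:
    return "keep_config"      # stand-in for the in-production model

def candidate_model(features: dict) -> str:
    return "keep_config"      # stand-in for the model being rolled out

CANARY_FRACTION = 0.05        # 5% of live traffic is answered by the candidate
SHADOW_MODE = True            # candidate also scores other traffic, logged only

def log_comparison(features: dict, served: str, shadow: str) -> None:
    # In a real system this feeds agreement/drift dashboards and the
    # go/no-go decision for widening the canary.
    pass

def serve(features: dict) -> str:
    use_canary = random.random() < CANARY_FRACTION
    primary = candidate_model if use_canary else current_model
    response = primary(features)

    if SHADOW_MODE and primary is current_model:
        shadow = candidate_model(features)   # never returned to the caller
        log_comparison(features, response, shadow)
    return response

print(serve({"cell_id": "A123", "hour": 14}))
```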

Phase 3: Optimize unit economics and energy efficiency

This is where the AI in Cloud Computing & Data Centers theme shows up most clearly. At scale, the winners focus on boring operational knobs:

  • autoscaling with queue-based scheduling
  • model compression and distillation to cheaper inference
  • power-aware workload placement (run heavy jobs when energy is cheaper/cleaner; sketched below)
  • liquid cooling where density forces it

The shift is cultural too: infrastructure teams and AI teams must share a single cost and performance model.
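
Power-aware placement in particular can start as a very simple scheduling rule before it becomes a platform feature. A sketch, assuming you can get an hourly price or carbon-intensity signal from your supplier (the prices below are made up):

```python
# Sketch: defer heavy, non-urgent jobs (training, batch scoring) to the
# cheapest hours before their deadline. Prices are illustrative placeholders.

HOURLY_PRICE = {h: 90.0 for h in range(24)}          # $/MWh baseline
HOURLY_PRICE.update({h: 45.0 for h in range(1, 6)})  # assumed cheap overnight window

def pick_start_hour(job_hours: int, deadline_hour: int) -> int:
    """Choose the start hour that minimizes energy cost and still meets the deadline."""
    candidates = range(0, deadline_hour - job_hours + 1)
    def window_cost(start: int) -> float:
        return sum(HOURLY_PRICE[h % 24] for h in range(start, start + job_hours))
    return min(candidates, key=window_cost)

# A 4-hour retraining job that must finish by hour 18 lands in the cheap window.
print(pick_start_hour(job_hours=4, deadline_hour=18))  # -> 1
```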

How this changes telecom cloud strategy for 2026

The telecom cloud conversation used to be dominated by virtualization, 5G core cloud-native builds, and edge MEC. AI adds a new requirement: compute density plus predictable energy.

Here’s the operational reality I’m seeing across the industry:

  • Central clouds are great for training and large-batch analytics.
  • Regional data centers are where most “network operations inference” wants to live.
  • Edge sites matter for latency-sensitive inference, but only if you can manage fleet complexity.

So the right question isn’t “cloud vs edge.” It’s:

“Which AI workloads must be close to the network, and which can be scheduled wherever power and cost are favorable?”

Workload placement cheat sheet (telecom edition)

  • RAN near-real-time optimization: edge/regional (latency-sensitive)
  • Core network anomaly detection: regional/central (depends on architecture)
  • Customer care copilots: central/regional (compliance and integration driven)
  • Fraud detection: central/regional (data aggregation heavy)
  • Predictive maintenance for towers: regional (field ops integration; intermittent)

If you don’t map workload placement early, you end up with expensive, fragile architectures—and a mess of exceptions no one can operate.
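
One way to keep the exceptions from multiplying is to encode the cheat sheet above as an explicit placement policy that every new workload has to be added to before it ships. A minimal sketch with hypothetical workload classes:

```python
# Sketch: the placement cheat sheet as data, so every new AI workload gets an
# explicit tier decision. Workload classes and rules are illustrative.
from enum import Enum

class Tier(Enum):
    EDGE = "edge"
    REGIONAL = "regional"
    CENTRAL = "central"

# workload class -> (allowed tiers in preference order, primary constraint)
PLACEMENT_POLICY = {
    "ran_near_rt_optimization": ([Tier.EDGE, Tier.REGIONAL],    "latency"),
    "core_anomaly_detection":   ([Tier.REGIONAL, Tier.CENTRAL], "architecture"),
    "care_copilot":             ([Tier.CENTRAL, Tier.REGIONAL], "compliance/integration"),
    "fraud_detection":          ([Tier.CENTRAL, Tier.REGIONAL], "data aggregation"),
    "tower_predictive_maint":   ([Tier.REGIONAL],               "field ops integration"),
}

def place(workload: str) -> Tier:
    """Return the preferred tier, or fail loudly so new workloads can't skip review."""
    if workload not in PLACEMENT_POLICY:
        raise ValueError(f"{workload}: no placement decision on record; add one before deploying")
    tiers, _constraint = PLACEMENT_POLICY[workload]
    return tiers[0]

print(place("ran_near_rt_optimization"))  # Tier.EDGE
```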

The real question: Are you buying capacity, or building capability?

This deal highlights a strategic fork in the road: some organizations will “rent” AI infrastructure capacity, others will build an internal capability to plan, operate, and optimize it.

For telcos, capability matters because:

  • AI is becoming a network operations competency, not a vendor feature
  • compliance and privacy requirements are tightening (especially around customer data)
  • energy costs are now directly tied to digital service margins

If you’re responsible for AI in telecom, here are practical next steps that work well in December planning season (and hold up when budgets get cut in Q1):

  1. Create an AI capacity plan in MW-equivalents and $/inference. Even if it’s rough, it forces realism.
  2. Define your “two-tier” AI platform: centralized + regional, with clear workload rules.
  3. Standardize observability: latency, cost per request, GPU utilization, model drift, and rollback time.
  4. Treat energy as a first-class KPI: power per inference, cooling overhead, and time-of-day scheduling.
  5. Negotiate partnerships like infrastructure, not software: term length, expansion options, pass-throughs, and exit clauses matter.

If you want a simple litmus test: if your AI program can’t tell you its top three cost drivers by category (compute, data movement, people/process), it’s not ready to scale.
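
For step 1 in the list above, even a crude model in code keeps that conversation honest. A sketch with entirely hypothetical volumes, hardware assumptions, and prices; the point is the structure, not the numbers:

```python
# Sketch: express an AI capacity plan in MW-equivalents and $/inference.
# Every figure is a placeholder to be replaced with your own measurements.

WORKLOADS = {
    # name: (requests per day, average GPU-seconds per request)
    "care_copilot":         (2_000_000, 0.8),
    "alarm_classification": (50_000_000, 0.05),
    "churn_scoring":        (10_000_000, 0.02),
}

GPU_POWER_KW = 0.7    # assumed average draw per accelerator under load
PUE = 1.3             # assumed facility overhead
GPU_HOUR_COST = 2.50  # assumed all-in $/GPU-hour (hardware, power, people)
UTILIZATION = 0.6     # assumed average cluster utilization

busy_gpu_hours = sum(req * sec for req, sec in WORKLOADS.values()) / 3600
provisioned_gpu_hours = busy_gpu_hours / UTILIZATION   # idle headroom included
avg_gpus_provisioned = provisioned_gpu_hours / 24
mw_equivalent = avg_gpus_provisioned * GPU_POWER_KW * PUE / 1000

total_requests = sum(req for req, _ in WORKLOADS.values())
cost_per_day = provisioned_gpu_hours * GPU_HOUR_COST

print(f"MW-equivalent demand: {mw_equivalent:.2f} MW")
print(f"Blended cost per inference: ${cost_per_day / total_requests:.5f}")
```

Even at this level of crudeness, the exercise surfaces the questions that matter: which workload dominates the bill, and what utilization assumption the whole plan rests on.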

What to watch next: the “AI infrastructure arms race” spills into telecom

Anthropic has previously announced plans to spend $50 billion on custom US data centers to compete with other AI players. Whether you like that arms-race framing or not, it signals something telecom can’t ignore: AI infrastructure is now strategic capacity, like spectrum or fiber.

Telcos won’t match Big Tech megawatts—and they shouldn’t try. But telcos do have an advantage: deep network telemetry, real-time operational loops, and the ability to place compute close to where data is created.

The operators that win in 2026 will treat AI infrastructure as a product: costed, observable, and engineered for reliability.

If you’re planning your 2026 roadmap, here’s the forward-looking question worth sitting with: which network decisions will you trust to AI, and what infrastructure guarantees are you willing to commit to make that safe?