AI-Powered Holographic Calls: What 6G-XR Proves
A holographic call isn’t “just video, but fancier.” It’s a live 3D capture of a person (often from multiple cameras), compressed, streamed, reconstructed, and rendered in real time—while the network juggles everyone else in the cell doing normal-life December things: shopping spikes, travel days, end-of-year work calls, and event traffic.
That’s why a recent 6G-XR consortium validation matters. Capgemini, Ericsson, i2CAT Research Centre, and Vicomtech demonstrated consistent holographic calling on Ericsson standalone 5G infrastructure in Barcelona and Madrid, using proactive congestion detection and intelligent traffic prioritisation to keep the experience stable under load. It’s a very “now” result: 6G ambition, tested on 5G reality.
For this AI in Telecommunications series, here’s the point I don’t want anyone to miss: holographic communications won’t scale on raw bandwidth alone. They scale on AI-driven network intelligence—prediction, prioritisation, and adaptation across radio, edge, and application layers.
Why holographic comms break “normal” network planning
Answer first: Holographic calls stress the network in three ways at once—bitrate volatility, latency sensitivity, and compute dependence—and traditional static QoS planning can’t keep up.
A regular video call can often degrade gracefully. A holographic stream is more fragile. Small changes in throughput or jitter can cause visible artifacts, reconstruction errors, or motion weirdness that users interpret as “it’s broken,” not “quality is lower.” And unlike video, holography is tightly coupled to rendering and reconstruction compute, which makes edge and GPU availability part of the “network” experience.
The three constraints that matter most
- Latency (and consistency): It’s not only the average latency. Variance kills immersion.
- Uplink pressure: Holographic capture can be uplink-heavy, especially in enterprise scenarios (remote expert, factory inspection, training).
- Compute proximity: If the reconstruction/rendering node is far away—or overloaded—the experience suffers even if radio conditions are fine.
That combo is why the consortium’s focus on congestion detection, adaptive streaming, and traffic prioritisation is so telling. They weren’t chasing a peak speed headline. They were chasing predictability.
What the 6G-XR validation actually proved (and what it hints at)
Answer first: The demonstrations showed that network-aware holography is achievable on 5G SA, and the “secret sauce” is closed-loop intelligence: detect congestion early, adapt streams fast, and allocate resources on demand.
According to the consortium's announcement, the holographic component ran on dedicated i2CAT technology, validating consistent holographic calling on Ericsson 5G standalone infrastructure. The standout technical choices:
- Proactive congestion detection to monitor cell performance and adapt the stream during high traffic.
- Intelligent traffic prioritisation to put holographic traffic first and pull additional resources when needed.
This maps cleanly to where telco AI is heading:
- Prediction (spot trouble before users feel it)
- Policy (decide what should happen)
- Control (actually make it happen across RAN, transport, core, and edge)
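The prediction, policy, and control stages above can be sketched as one minimal closed loop. This is an illustrative sketch, not the 6G-XR implementation: the signal fields, scoring formula, and thresholds are all assumptions.

```python
# Minimal predict -> policy -> control loop for one cell.
# All fields, weights, and thresholds are illustrative, not from 6G-XR.
from dataclasses import dataclass

@dataclass
class CellState:
    prb_utilization: float   # fraction of physical resource blocks in use
    uplink_jitter_ms: float  # recent uplink jitter

def predict_congestion(state: CellState) -> float:
    """Toy congestion score in [0, 1]; a real system would use a trained model."""
    jitter_term = min(state.uplink_jitter_ms / 20.0, 1.0)
    return min(1.0, 0.7 * state.prb_utilization + 0.3 * jitter_term)

def decide_policy(score: float) -> str:
    """Map a predicted congestion score to an intent."""
    if score > 0.8:
        return "prioritise_holographic_sessions"
    if score > 0.5:
        return "adapt_stream_bitrate"
    return "no_action"

def apply_control(action: str) -> str:
    """Placeholder for actuation across RAN, core, and edge (QoS flows,
    scheduler hints, edge relocation)."""
    return f"executing: {action}"

state = CellState(prb_utilization=0.9, uplink_jitter_ms=15.0)
result = apply_control(decide_policy(predict_congestion(state)))
```

The point is the shape, not the arithmetic: detection feeds a policy decision, and the policy decision drives an enforceable action rather than a dashboard.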
Snippet-worthy truth: For holographic communications, “good coverage” isn’t enough. You need AI that keeps quality stable when the network gets messy.
Why testing on 5G SA matters more than name-dropping 6G
This wasn’t a 6G lab fantasy. It’s important because it validates an approach that operators can start building now:
- 5G SA + edge computing as the foundation
- AI-driven observability (radio + core + edge)
- Application-level adaptability (the stream reacts to network conditions)
That last point is huge. The report notes distributed XR services using standardised APIs to adapt to the underlying compute environment and select the optimal edge node. In practice, this is the difference between “XR is a demo” and “XR is a product.”
AI is the only practical way to run XR and holography at scale
Answer first: If operators try to manage XR/holographic workloads with manual tuning and static rules, they’ll either overprovision (expensive) or underserve (churn). AI gives you a third option: dynamic, intent-based optimisation.
Think about what has to happen during a holographic call in a busy cell:
- Detect early signs of congestion (before packet loss spikes)
- Decide whether to:
  - adjust codec parameters,
  - shift compute to a different edge node,
  - trigger prioritised scheduling,
  - allocate more slice resources,
  - or throttle less critical traffic
- Execute those actions quickly and safely
Humans can’t do that per session, per cell, per second. Automation can—but only if it’s informed by good signals and good models.
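That per-session decision branch can be sketched as a simple rule table. The inputs, thresholds, and action names below are hypothetical; a production policy engine would learn and tune them, and would verify each action's effect before acting again.

```python
def choose_action(loss_trend: float, edge_gpu_load: float,
                  slice_headroom: float) -> str:
    """Pick one remediation for a degrading holographic session.
    All thresholds are illustrative assumptions."""
    if loss_trend < 0.01:
        return "no_action"                    # no early congestion signal
    if edge_gpu_load > 0.85:
        return "migrate_to_other_edge_node"   # compute, not radio, is the bottleneck
    if slice_headroom > 0.2:
        return "allocate_more_slice_resources"
    return "adjust_codec_parameters"          # last resort: degrade gracefully

# A saturated edge GPU wins over a radio-side fix even when slice headroom exists.
action = choose_action(loss_trend=0.03, edge_gpu_load=0.9, slice_headroom=0.5)
```

Even this toy version shows why ordering matters: picking the wrong remediation (throttling the codec when the real bottleneck is the GPU) wastes quality without fixing the session.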
Where AI fits in the holographic comms stack
Here’s a practical mapping I use when talking to telecom teams evaluating AI for network optimization:
- RAN AI (near-real-time control): predict cell congestion, optimize scheduling, manage uplink bottlenecks
- Core AI (policy + session steering): decide which sessions get priority and when; enforce QoS and slice policies
- Edge AI (compute orchestration): place workloads on the best node based on latency, GPU load, and proximity
- App AI (adaptive experience): adjust reconstruction quality, frame rate, and compression based on network feedback
The 6G-XR tests touched at least three of these layers (RAN awareness, prioritisation, and edge selection). That’s the blueprint.
Proactive congestion detection: what it should become in production
“Proactive congestion detection” in a demo often starts as an algorithm. In production, it becomes an operating model:
- Signals: PRB utilization, uplink interference, queue depth, HARQ retransmission rates, transport latency, edge GPU utilization, session KPIs
- Models: short-horizon forecasting (seconds to minutes), anomaly detection, causal impact analysis
- Actions: policy-driven changes with guardrails (no oscillation, no runaway prioritisation)
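One concrete guardrail is hysteresis: engage prioritisation above one threshold, release it only below a lower one, so the controller does not flap. A minimal sketch, with an EWMA forecast standing in for a real short-horizon model and all constants assumed:

```python
class CongestionGuard:
    """EWMA load forecast plus on/off hysteresis so prioritisation
    does not oscillate. Alpha and thresholds are illustrative."""

    def __init__(self, alpha: float = 0.5,
                 on_threshold: float = 0.8, off_threshold: float = 0.6):
        self.alpha = alpha
        self.on_threshold = on_threshold    # engage above this
        self.off_threshold = off_threshold  # release only below this
        self.forecast = 0.0
        self.engaged = False

    def update(self, prb_utilization: float) -> bool:
        # Exponentially weighted forecast of cell utilisation.
        self.forecast = (self.alpha * prb_utilization
                         + (1 - self.alpha) * self.forecast)
        if not self.engaged and self.forecast > self.on_threshold:
            self.engaged = True     # predicted congestion: act early
        elif self.engaged and self.forecast < self.off_threshold:
            self.engaged = False    # clear recovery: release safely
        return self.engaged

guard = CongestionGuard()
states = [guard.update(u) for u in [0.5, 0.9, 0.95, 0.97, 0.7, 0.65, 0.4]]
# Prioritisation engages as load ramps and releases only after a clear recovery,
# instead of toggling on every sample near the threshold.
```

The gap between the two thresholds is the "no oscillation" guardrail in code form: brief dips below the engage threshold do not immediately strip a session of its priority.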
A strong stance: the winners will be operators who treat AI as part of the network control plane, not a reporting tool. Dashboards don’t stop congestion. Controls do.
Edge discovery and “best node” selection: the underrated make-or-break
Answer first: XR and holography are as much an edge placement problem as they are a radio problem, and AI is the cleanest way to pick the right node under real-world variability.
The report highlights an edge computing demonstration where XR services select the optimal node and adapt via standardised APIs. That sounds straightforward until you try to operationalize it.
“Best node” changes constantly:
- At 09:00, Node A is closest and lightly loaded.
- At 09:10, Node A’s GPU is saturated (another team started a digital twin simulation).
- At 09:12, transport latency to Node B rises due to backhaul contention.
Static routing will fail here. AI-driven orchestration can continuously score nodes using multi-factor inputs (latency, load, cost, energy, SLA class) and steer sessions accordingly.
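A multi-factor scorer for that scenario might look like the sketch below. The weights, fields, and latency budget are assumptions for illustration; a real orchestrator would also factor in energy, SLA class, and migration cost.

```python
# Multi-factor edge node scoring; lower score is better.
# Fields, weights, and the latency budget are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: float
    gpu_load: float       # 0..1
    cost_per_hour: float

def score(node: EdgeNode, latency_budget_ms: float = 20.0) -> float:
    """Disqualify nodes over the latency budget, then blend latency,
    GPU load, and cost into one comparable score."""
    if node.latency_ms > latency_budget_ms:
        return float("inf")
    return (0.5 * (node.latency_ms / latency_budget_ms)
            + 0.4 * node.gpu_load
            + 0.1 * (node.cost_per_hour / 10.0))

nodes = [
    EdgeNode("node-a", latency_ms=5.0, gpu_load=0.95, cost_per_hour=4.0),   # close but saturated
    EdgeNode("node-b", latency_ms=12.0, gpu_load=0.30, cost_per_hour=6.0),  # further but lightly loaded
]
best = min(nodes, key=score)
```

Run against the 09:10 scenario above, the scorer picks node-b: the extra transport latency is worth avoiding a saturated GPU, which is exactly the trade-off static "nearest node" routing cannot make.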
What telcos should standardize now
If you’re an operator or a partner building toward immersive services, standardize these pieces early:
- Edge service catalog: what compute profiles exist (CPU, GPU, NPU), where they are, and what they cost
- Service intent and tiers: “holographic call” isn’t one thing; define quality tiers with clear resource envelopes
- APIs for adaptation: apps need a supported way to receive network/edge signals and adjust safely
The consortium’s emphasis on standardized APIs is the right direction. It lowers friction for ecosystem partners and makes XR services repeatable.
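On the application side, "a supported way to adjust safely" can be as simple as mapping network signals to predefined quality tiers. The signal schema and tier definitions below are hypothetical, not the consortium's actual APIs:

```python
# App-side adaptation: map a network quality signal to a stream tier.
# Tier names, budgets, and thresholds are hypothetical examples.
TIERS = {
    "full":     {"point_budget": 1_000_000, "fps": 30},
    "reduced":  {"point_budget": 400_000,  "fps": 30},
    "fallback": {"point_budget": 150_000,  "fps": 15},
}

def select_tier(available_uplink_mbps: float, rtt_ms: float) -> str:
    """Choose the richest tier the current network conditions support."""
    if available_uplink_mbps >= 50 and rtt_ms <= 30:
        return "full"
    if available_uplink_mbps >= 20:
        return "reduced"
    return "fallback"
```

Defining tiers up front, rather than adapting continuously, keeps transitions predictable for users and gives operators clear resource envelopes to provision against.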
Two use cases that justify investment (and how to sell them internally)
Answer first: The first profitable holographic/XR deployments will be enterprise-led—where the value of fewer errors, faster fixes, and better training outweighs the cost of premium connectivity and edge compute.
Ericsson’s R&D Spain head referenced sectors like remote industrial maintenance and advanced education where instant, high-fidelity collaboration is vital. I agree with that prioritization.
Use case 1: Remote industrial maintenance (the “expert in your pocket”)
What success looks like: A field tech streams an XR/holographic view; a remote expert annotates, guides, and verifies steps in real time.
Why AI matters:
- Predict congestion in the serving cell and preempt quality drops
- Trigger prioritisation when the session is flagged “safety critical”
- Keep latency steady by relocating reconstruction to a less loaded edge node
Internal selling point: reduced mean time to repair (MTTR) and fewer repeat visits. Even a small reduction in truck rolls can justify premium network features.
Use case 2: Advanced education and skills training
What success looks like: A trainer appears as a hologram for a small cohort; students interact, rotate objects, and receive feedback.
Why AI matters:
- Personalize experience tiers (instructor gets highest fidelity; observers get optimized streams)
- Schedule edge compute around class times
- Automate customer experience monitoring (detect when learners experience degradation)
Internal selling point: a product story that’s easier than “holograms for everyone.” Start with institutions and enterprises that pay for reliability.
What telcos should do in 2026 to be ready for holographic services
Answer first: Build the operational capabilities—AI observability, edge orchestration, and policy control—before chasing mass-market holographic calling.
Here’s a pragmatic checklist operators can act on without waiting for 6G standards to land:
- Treat immersive traffic as a first-class KPI: define XR/holography-specific KPIs (motion-to-photon, jitter budget, uplink stability) and monitor them per cell/site.
- Invest in AI for network optimization, not just analytics: prioritize closed-loop automation pilots: detect → decide → act → verify.
- Productize prioritisation (carefully): build an "on-demand quality" feature that can be invoked by apps or enterprise portals, with clear policy guardrails.
- Make edge discovery real: maintain a live view of edge node health (latency, GPU load, failure rates) and expose it via APIs.
- Design for "degrade gracefully": work with application partners so streams adapt smoothly rather than collapsing when conditions change.
If you do these five things, you’re not betting on hype. You’re building capabilities that also improve 5G network slicing, private 5G, and even high-value consumer experiences.
The bigger story: 6G isn’t the headline—AI control is
The consortium’s work is a preview of how AI and telecom networks will converge to deliver immersive communication at scale. Holographic communications are simply the most demanding “stress test” we’ve seen in the XR category: they expose weak automation, weak edge strategy, and weak prioritisation models quickly.
If you’re following this AI in Telecommunications series because you want practical paths to revenue, here’s my take: the operators who win will be the ones who turn AI-driven network management into a sellable capability—reliability, not raw speed.
If you’re exploring where to start, begin with one enterprise-grade XR workload and build the controls around it: proactive congestion detection, application-aware prioritisation, and edge node selection. Then expand.
What would your network do today if 500 users tried holographic calling in one city district during a peak holiday week—would it adapt, or would it just break?