AI e-commerce needs efficient datacentres. Learn how cooling design, PUE, and renewable power in South Africa shape AI cost, reliability, and ESG.

AI E-commerce Needs Efficient South African Datacentres
South African e-commerce is getting hungrier for compute. Not because online stores suddenly became bigger websites, but because AI is now sitting behind the “Buy now” button—powering product recommendations, fraud checks, demand forecasting, customer support, and real-time personalisation.
Here’s the part many retailers miss: AI doesn’t just cost money in cloud bills. It costs energy. And when every watt going into a server comes out as heat, the quiet “backbone” of AI-powered retail becomes the datacentre’s ability to stay cool—efficiently.
A recent look at Africa Data Centres (ADC) highlights how modern datacentre design in South Africa is changing: closed-loop cooling that wastes almost no water, smarter temperature targets, hot/cold aisle containment, and “free cooling” for roughly 180 days a year in Johannesburg when the outside temperature cooperates. If you’re building an AI-driven e-commerce stack (or buying AI-enabled digital services), these details shape your costs, your reliability, and your ESG story.
Efficient datacentres are the hidden cost driver of AI in retail
If your AI roadmap ignores infrastructure efficiency, you’ll overpay—month after month. AI workloads (especially training, and increasingly inference at scale) push high-density compute into racks, which pushes heat up, which pushes cooling demand up. That cooling is part of what you’re paying for in colocation or cloud.
A useful mental model is simple: electricity in = heat out. Datacentres aren’t “using” energy in some abstract way; they’re converting it into heat and then paying again to remove that heat.
This is why Power Usage Effectiveness (PUE) matters to business people, not just facilities engineers. PUE is the ratio of total facility energy to IT equipment energy. A PUE of 1.3 means that for every 1.0 kWh used by your servers, the facility uses about 0.3 kWh extra to run cooling, power distribution, fans, pumps, lighting, and support systems.
For AI-heavy e-commerce, that difference is not theoretical. If you’re scaling personalisation, search, dynamic pricing, and fraud detection during peak season, a less efficient facility can turn growth into a margin problem.
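To make the PUE overhead concrete, here is a minimal sketch of the arithmetic. The IT load, tariff, and PUE values are hypothetical assumptions for illustration, not figures from ADC or any specific provider.

```python
# Illustrative PUE overhead calculation. The 50 kW IT load, R2.50/kWh
# tariff, and PUE values below are hypothetical assumptions.

def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy = IT equipment energy x PUE."""
    return it_energy_kwh * pue

# Assume a 50 kW average IT load running 24/7 for a 30-day month.
it_kwh = 50 * 24 * 30  # 36,000 kWh of IT energy

tariff_zar_per_kwh = 2.50  # hypothetical blended tariff

for pue in (1.3, 1.6):
    total = facility_energy_kwh(it_kwh, pue)
    overhead = total - it_kwh  # energy spent on cooling, power loss, etc.
    cost = total * tariff_zar_per_kwh
    print(f"PUE {pue}: total {total:,.0f} kWh "
          f"(overhead {overhead:,.0f} kWh), cost R{cost:,.0f}")
```

Under these assumptions, moving from a PUE of 1.6 to 1.3 cuts roughly 10,800 kWh of pure overhead per month—savings that compound as AI workloads scale.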
Why South African e-commerce should care right now (December timing)
December retail peaks put extra stress on everything: payment fraud risk rises, support tickets spike, and inventory mistakes get expensive fast. Many businesses respond by turning up AI usage—more real-time scoring, more automated support, more forecasting runs.
Peak AI usage is also peak heat. If the infrastructure behind your AI is fragile or inefficient, you’ll feel it as:
- Slower systems at the worst time
- Higher cloud/colocation bills
- More risk in failover scenarios during load-shedding or grid instability
Cooling design: the part of AI scaling nobody budgets for
Most companies get this wrong: they treat datacentres like “landlords,” not engineering systems. ADC’s regional leadership describes mechanical engineering—especially cooling—as the real complexity in datacentre operations.
A few design choices make a disproportionate difference to AI-driven digital services.
Closed-loop cooling and why “waterless” matters
Datacentre water use is becoming a global talking point because some cooling approaches evaporate water as part of heat rejection. ADC’s approach, as described, relies heavily on external air-cooled chillers with a closed-loop water system to move heat around.
The practical implications for South African retailers and digital service providers:
- Stronger ESG positioning when your AI services are questioned on environmental impact
- Lower operational risk in water-stressed regions
- More predictable cost base (water price volatility and restrictions can become a hidden risk)
If your brand talks sustainability, your infrastructure choices can’t be hand-wavy. Your customers won’t ask about your chiller design—but enterprise procurement teams increasingly will.
Shade over chillers: simple idea, real savings
Putting chillers in direct sun makes them work harder. ADC’s JHB1 facility uses a soft-shell roof to keep chillers shaded.
This isn’t glamorous. It’s just competent engineering. But for AI workloads, competent engineering is what keeps latency stable and costs controlled.
“Free cooling” and why Johannesburg has an advantage
When outside air is cool enough, facilities can reduce or switch off parts of active refrigeration. ADC notes that when the outside temperature drops below about 17°C, refrigeration units can be turned off, and Johannesburg can see around 180 days per year suitable for this.
They estimate 5%–10% less energy use than if chillers ran continuously.
For AI e-commerce workloads, that can translate into:
- Better unit economics for always-on AI inference (recommendations, fraud scoring)
- A stronger story when comparing regions for hosting customer-facing systems
- More headroom to scale during campaign periods without cost spikes
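The cited 5%–10% figure can be turned into a rough savings estimate. This is a back-of-envelope sketch: the baseline annual chiller energy is a hypothetical assumption, and the savings fractions are the range quoted above.

```python
# Rough free-cooling savings estimate using the 5-10% range cited
# above. The 1,000,000 kWh baseline chiller energy is a hypothetical
# assumption, not a figure for any real facility.

def free_cooling_savings_kwh(annual_cooling_kwh: float,
                             savings_fraction: float) -> float:
    """Annual energy saved if free cooling trims the given fraction."""
    return annual_cooling_kwh * savings_fraction

baseline_cooling_kwh = 1_000_000  # hypothetical annual chiller energy

for frac in (0.05, 0.10):
    saved = free_cooling_savings_kwh(baseline_cooling_kwh, frac)
    print(f"{frac:.0%} saving -> {saved:,.0f} kWh/year")
```

Even at the low end of the range, that is tens of thousands of kilowatt-hours per year that never show up on anyone's bill.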
Temperature strategy: stop running your AI like a fridge
Keeping data halls too cold is an expensive habit. ADC describes modern set-point temperatures around 23–24°C, while older facilities are often run “like fridges.”
The key point: lower set-point temperature increases energy use. Modern standards (like ASHRAE’s recommended ranges) allow warmer operation as long as airflow and containment are engineered correctly.
For AI-heavy racks (GPUs and high-end CPUs), components might operate internally at 60–70°C, and efficient cooling is about pushing the right volume of air at the right temperature through the equipment, not overcooling the entire room.
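The "right volume of air at the right temperature" idea follows from the standard sensible-heat relation: required airflow equals rack power divided by (air density × specific heat × temperature rise). The sketch below uses sea-level air properties and a hypothetical rack power and delta-T; Johannesburg's altitude would lower the air density somewhat, so treat this as illustrative only.

```python
# Back-of-envelope airflow needed to remove rack heat, using the
# sensible-heat relation Q = P / (rho * cp * dT). The 10 kW rack and
# 12 C temperature rise are hypothetical; air properties are
# approximate sea-level values.

AIR_DENSITY = 1.2         # kg/m^3 (approximate, sea level)
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3s(rack_power_w: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/s) to absorb rack_power_w at delta_t_c rise."""
    return rack_power_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

# A hypothetical 10 kW AI rack with a 12 C air temperature rise:
flow = required_airflow_m3s(10_000, 12)
print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")
```

Note what the formula rewards: a larger allowed temperature rise (good containment, warmer set-points) means less air has to be moved, which is exactly why overcooling the room is wasteful.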
Hot aisle / cold aisle containment: boring, decisive
Containment is about making sure cold air goes through racks rather than mixing in the room. ADC highlights that even small issues like gaps in racks matter—blanking panels prevent air from leaking and bypassing equipment.
If you run an e-commerce platform team, this has a direct analogue: optimise the flow, not the headline number. Just as you don’t want data “leaking” through unnecessary hops, you don’t want air leaking through gaps.
Containment ties back to AI performance in a practical way:
- Stable temperatures reduce throttling risk under heavy inference loads
- Better airflow control reduces hotspots in high-density GPU deployments
- Operational predictability improves SLA confidence for customer-facing systems
ESG and the grid: why AI infrastructure choices affect your brand
Every AI feature you ship becomes part of your ESG footprint. Datacentres think in scopes:
- Scope 1: On-site fuel use (generators)
- Scope 2: Purchased electricity (grid emissions factor)
- Scope 3: Embodied emissions in equipment manufacturing and supply chain
Datacentres try to avoid generator runtime because that directly increases on-site emissions (scope 1). For retailers, the brand risk is straightforward: the more your peak periods depend on diesel-backed resilience, the harder your sustainability message becomes.
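Scope 2 is the scope most retailers can quantify themselves: electricity consumed multiplied by the grid's emissions factor. The sketch below uses an illustrative ~1.0 kgCO2e/kWh factor for a coal-heavy grid—an assumption, not an official Eskom figure—and a hypothetical monthly consumption.

```python
# Sketch of a scope 2 estimate: kWh consumed x grid emissions factor.
# The ~1.0 kgCO2e/kWh factor is an illustrative value for a coal-heavy
# grid, and the monthly consumption is a hypothetical assumption.

def scope2_tonnes(kwh: float, grid_factor_kg_per_kwh: float) -> float:
    """Scope 2 emissions in tonnes CO2e (kg converted to tonnes)."""
    return kwh * grid_factor_kg_per_kwh / 1000

monthly_kwh = 46_800  # hypothetical facility consumption for the month
print(f"~{scope2_tonnes(monthly_kwh, 1.0):.1f} tCO2e/month")
```

The same arithmetic explains why efficiency is an emissions lever: every kWh of cooling overhead you avoid is a kWh that never needs an emissions factor applied to it.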
Renewable energy and wheeling: good plans, slow reality
South Africa’s renewable procurement market has matured quickly—many providers are eager to sell solar and wind into long-term contracts. But wheeling (getting that energy through the grid to where it’s consumed) can be slow due to coordination across councils, grid operators, and contractual structures.
For e-commerce and digital services, the stance I recommend is:
- Assume renewable sourcing will improve, but don’t build your near-term AI business case on “future green electrons.”
- Prioritise efficiency first because it pays back even before renewables are fully delivered.
Efficiency is the one ESG move that also reduces costs immediately.
What this means for AI-powered e-commerce in South Africa
Your AI strategy is only as scalable as the infrastructure it runs on. That’s not a cloud-vs-colocation argument; it’s a design-and-operations argument.
Here are practical ways to turn datacentre realities into better AI outcomes for online retail and digital services.
A buyer’s checklist for AI-ready hosting (ask these questions)
If you’re hosting AI models, personalisation engines, or high-traffic digital services, ask your provider:
- What PUE do you achieve on new halls (and what do you achieve on average)?
- Do you use closed-loop cooling, and what’s your water consumption approach?
- What’s your standard temperature set-point, and how do you manage containment?
- How many days per year do you benefit from free cooling in your location?
- What’s your generator runtime policy and testing cadence (and how do you limit scope 1 emissions)?
- Do you have renewable energy procurement contracts, and are you actually wheeling power yet?
The goal isn’t to become a mechanical engineer. It’s to avoid buying “AI capacity” that comes with avoidable overhead.
Match workloads to the right efficiency profile
Not every workload needs the same infrastructure:
- Latency-sensitive inference (search ranking, recommendations, fraud scoring) benefits most from stable, efficient facilities close to users.
- Batch training and analytics can be scheduled for off-peak windows, but still benefit from better PUE because training runs are energy intensive.
- Customer support AI (chat, email triage) is often always-on and cost-sensitive—efficiency protects margins.
If you’re a retailer working with an AI vendor, ask where the models run and how they handle scale during peak season.
Use efficiency as a product and procurement advantage
Retailers that sell into enterprise procurement (marketplaces, B2B platforms, financial services partners) increasingly face ESG questionnaires. A credible infrastructure story helps you win deals.
A simple, defensible position is:
“We reduce the cost and carbon of AI by choosing efficient, low-water datacentre design and prioritising renewable supply where available.”
That statement is operational, not performative.
The next chapter of this series: AI needs boring excellence
This post sits in our series on how AI is powering e-commerce and digital services in South Africa. A lot of the conversation focuses on models, prompts, and shiny features. I’m firmly in the other camp: boring excellence in infrastructure is what makes AI profitable.
If you’re planning AI-powered personalisation, forecasting, or fraud prevention for 2026, start mapping infrastructure decisions now—PUE, cooling design, renewable readiness, and resilience policies. Those choices show up later as either a stable platform or a constant “why is our bill so high?” problem.
If you had to pick one place to get more disciplined this quarter, would it be model experimentation or the infrastructure and cost base those models depend on?