Energy-efficient data centres are the backbone of AI in South African e-commerce. See how cooling design, PUE, and renewables shape cost and scale.

Energy-Efficient Data Centres Power AI Commerce in SA
A single modern AI server can draw more power than a small office. Now scale that to recommendation engines, fraud checks, on-site search, customer chat, real-time pricing, and a peak-season traffic spike. That’s the unglamorous truth behind AI-powered e-commerce in South Africa: the “smart” stuff only works if the infrastructure underneath it stays cool, stable, and affordable.
Most teams talk about AI like it’s just a software decision. It isn’t. Your cloud bill, your site latency, your model training cycles, and even your ability to meet ESG targets are tied to how efficiently data centres convert electricity into compute — and how efficiently they get the heat back out.
Africa Data Centres (with facilities in Johannesburg and Cape Town, plus Nairobi and Lagos) has been vocal about this reality: cooling design is the main determinant of data-centre efficiency, and efficiency is what customers end up paying for. If you’re building or scaling digital services in South Africa, that’s not trivia. It’s a competitive factor.
Data-centre efficiency is the hidden cost of AI in e-commerce
AI workloads turn electricity into heat with brutal consistency — essentially 100% of the power going in leaves as heat. That means you’re always paying twice: once to run the compute, and again to remove the heat it produces.
Data centres use a metric called Power Usage Effectiveness (PUE) to describe this overhead. A PUE of 1.3 means: for every 1.0 unit of IT power (servers, storage, networking), you spend an additional 0.3 on “everything else” (cooling, fans, pumps, power distribution). Newer halls at Africa Data Centres reportedly achieve a PUE of around 1.3.
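To make the maths concrete, here is a minimal back-of-envelope sketch in Python. Only the PUE of roughly 1.3 comes from the article; the IT load and electricity tariff are illustrative assumptions, not Africa Data Centres figures.

```python
# Back-of-envelope PUE arithmetic. IT load and tariff are illustrative
# assumptions; only the ~1.3 PUE is taken from the article.
it_load_kw = 500          # assumed IT load (servers, storage, networking)
pue = 1.3                 # PUE of the hall hosting the workload
tariff_r_per_kwh = 2.50   # assumed blended electricity tariff, rand per kWh

total_kw = it_load_kw * pue                  # total facility draw
overhead_kw = total_kw - it_load_kw          # cooling, fans, pumps, distribution
annual_overhead_kwh = overhead_kw * 24 * 365
annual_overhead_cost = annual_overhead_kwh * tariff_r_per_kwh

print(f"Overhead power: {overhead_kw:.0f} kW")
print(f"Annual overhead cost: R{annual_overhead_cost:,.0f}")
```

Run the same sketch at a PUE of 1.5 and the overhead on that IT load is two-thirds higher, which is why the metric shows up in hosting prices rather than staying an engineering statistic.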
Why should an e-commerce or digital services leader care?
- AI personalisation and search aren’t “free features.” They increase compute demand, and therefore heat.
- Peak season is a thermal stress test. Black Friday and December campaigns push utilisation up, and cooling systems don’t get to “pause.”
- Energy overhead becomes a margin issue. If your infrastructure is inefficient, your unit economics get worse as you grow.
I’ve found that the teams who treat infrastructure efficiency like a finance lever (not an engineering detail) make better decisions about what AI to run, where to run it, and how to forecast cost.
Why cooling design matters more than most people think
The biggest efficiency gains come from design choices, not day-to-day operations. That’s the stance from Africa Data Centres’ regional executive Angus Hay, and it matches what you see globally: once a data hall is built, you can tweak settings — but you can’t easily redesign airflow, containment, or chiller placement.
Closed-loop, air-cooled chillers: less water drama, more predictability
A major talking point for data centres worldwide is water use. Some facilities rely on evaporative cooling, which can consume significant water volumes depending on climate and design.
Africa Data Centres describes using external, air-cooled chillers with a closed-loop water system. The practical upshot: almost no water is wasted in the process because the water circulates rather than being evaporated away.
For South African digital businesses trying to hit ESG targets (or even just avoid reputational risk), that matters. When your AI features are hosted in facilities that don’t “burn” water to stay cool, you’re in a better position to answer hard procurement questions from enterprise customers.
Shade sounds basic — and it works
One detail from their Johannesburg facility in Midrand is almost funny in its simplicity: they keep chillers under a soft-shell roof so they’re shaded.
Chillers sitting in direct sun are forced to fight the heat while they’re trying to remove heat. It’s a straightforward physics problem that too many builds ignore.
The lesson for AI-driven e-commerce isn’t “go build a roof.” It’s this: efficiency is often a stack of unsexy decisions that compound over time. That’s exactly how high-performing AI systems are built too — model improvements, data quality, caching, and inference optimisation add up.
Free cooling and temperature set points: stop running the data hall like a fridge
Running colder than necessary is one of the easiest ways to waste energy. Older data centres, as Hay puts it, can be “like fridges.” The colder you set the supply air, the harder the cooling system works.
Africa Data Centres targets a set-point of around 23°C to 24°C, aiming for the middle of the commonly referenced ASHRAE recommended envelope (roughly 18°C to 27°C).
Two implications for AI infrastructure:
- AI pushes heat density up. GPU-heavy racks generate more heat per square metre, so airflow management becomes non-negotiable.
- You can’t brute-force cooling forever. Efficiency comes from controlling flow and avoiding unnecessary temperature drops.
Johannesburg’s climate is an underrated advantage
They also use free cooling: when the outside temperature drops below 17°C, the refrigeration units can be switched off and ambient air helps remove heat. Johannesburg can support free cooling for around 180 days a year, which the company says cuts energy use by roughly 5% to 10% compared with running chillers full-time.
That matters for South Africa’s AI economy because it creates a realistic path to:
- lower compute costs for local workloads,
- more predictable hosting expenses,
- better resilience planning when energy prices rise.
If you’re running AI-driven e-commerce, those percentages aren’t small — they show up in your cost per order and cost per support resolution.
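As a rough sense-check, a similar sketch translates that 5% to 10% range into kilowatt-hours and rand. The article doesn't specify whether the saving is measured against total facility energy or cooling energy alone, and the load and tariff below are assumptions, so treat this as an illustration rather than a forecast.

```python
# Rough estimate of annual savings from free cooling. Load and tariff are
# assumptions; the 5%-10% range and ~1.3 PUE come from the article.
it_load_kw = 500
pue = 1.3
tariff_r_per_kwh = 2.50

total_kwh_per_year = it_load_kw * pue * 24 * 365

for saving in (0.05, 0.10):
    saved_kwh = total_kwh_per_year * saving
    print(f"{saving:.0%} saving: about {saved_kwh:,.0f} kWh, "
          f"roughly R{saved_kwh * tariff_r_per_kwh:,.0f} per year")
```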
Hot/cold aisle containment: airflow is the real “performance tuning”
Containment is how you stop paying to cool empty space. The principle is simple: guide cold air through the front of racks, let it absorb heat, and channel hot exhaust away so it doesn’t mix back into the cold supply.
Africa Data Centres describes operating 3MW rooms with inlet temperatures around 23°C to 24°C, using containment so the hot exhaust can leave at around 30°C.
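To see why that inlet-to-exhaust spread matters, here is a simplified physics sketch of how much air has to move to carry a 3MW heat load at different temperature rises. The air properties are rough textbook values (adjusted loosely for Johannesburg's altitude), not the facility's design figures.

```python
# Airflow needed to carry a heat load at a given supply-to-exhaust delta-T.
# Q = m_dot * cp * delta_T  =>  m_dot = Q / (cp * delta_T)
room_heat_load_w = 3_000_000   # 3 MW room, as described above
cp_air = 1005                  # specific heat of air, J/(kg*K)
air_density = 0.95             # kg/m^3; rough value at Johannesburg's altitude (assumption)

for delta_t in (3.0, 6.5, 10.0):                          # temperature rise in kelvin
    mass_flow = room_heat_load_w / (cp_air * delta_t)     # kg/s of air required
    volume_flow = mass_flow / air_density                 # m^3/s of air required
    print(f"delta-T {delta_t:4.1f} K -> about {volume_flow:,.0f} m^3/s of airflow")
```

The narrower the temperature rise, the more air you have to push. That is why letting hot exhaust mix back into the cold supply is so costly: it shrinks the effective delta-T and forces fans and chillers to work harder for the same heat load.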
Containment works best when it’s treated as a system, not a “nice-to-have.” That includes basics like:
- Blanking panels to cover rack gaps (otherwise cold air leaks out and bypasses equipment)
- Consistent rack layouts
- Avoiding “random” open areas for high-density gear
Here’s the part e-commerce leaders should internalise: thermal design is capacity. If a facility can’t control airflow properly, it can’t reliably support dense AI workloads, and you’ll hit limits sooner—either performance throttling or expensive redesigns.
ESG, renewables, and why “green compute” is becoming a sales requirement
Infrastructure sustainability is shifting from marketing to procurement. More businesses now ask vendors to prove their ESG posture, and data-centre choices show up in those conversations.
Hay breaks emissions into the familiar scopes:
- Scope 1: on-site generation (diesel generators) — emissions created directly on campus
- Scope 2: purchased electricity — emissions created by the grid supplier
- Scope 3: embodied emissions — the manufacturing and supply-chain footprint of equipment
For many South African e-commerce and digital service providers, Scope 2 is the big one because electricity is a material operating cost. Africa Data Centres notes it’s in RFP processes with renewable providers and highlights a very current South African reality: wheeling agreements can take longer than expected to translate into actual delivered renewable electrons, due to coordination between municipalities, Eskom, and commercial parties.
This is the point where I’ll take a stance: if your business plan assumes “we’ll just switch to renewables next quarter,” you’re probably underestimating the timeline. Build a roadmap that includes contracting, wheeling complexity, and interim efficiency measures.
What this means for AI-powered e-commerce and digital services in South Africa
Efficient data centres don’t just reduce carbon—they protect growth. If your AI roadmap includes personalisation, automated merchandising, real-time risk scoring, or multilingual customer support bots, you’re signing up for higher compute intensity. And compute intensity is, inevitably, heat.
A practical checklist for digital teams (not just IT teams)
If you’re choosing hosting, colocation, or cloud architectures for AI workloads, use questions like these early — before you’re locked into a platform:
- What’s the PUE of the facility where our workloads run? Ask for current figures by hall/build age, not just a brochure claim.
- How is cooling designed? Look for containment, airflow modelling practices, and how chillers are protected from heat gain.
- Is water consumed for cooling? In water-stressed contexts, this is a strategic risk, not an engineering detail.
- What’s the plan for renewable energy and wheeling? Separate “contract signed” from “electrons delivered.”
- How will AI peaks be handled? Black Friday load profiles should be part of capacity and thermal planning.
Tie cooling discipline to AI discipline
There’s a nice parallel here: the same mindset that makes cooling efficient makes AI programs successful.
- Cooling teams minimise waste by controlling airflow; AI teams minimise waste by controlling data quality and inference costs.
- Cooling teams pick a sensible set-point; AI teams pick sensible model sizes and latency targets.
- Cooling teams rely on modelling (like CFD); AI teams rely on evaluation and monitoring.
Efficiency is culture, not a one-off project.
Where this series is headed (and what you should do next)
This post sits in our series on how AI is powering e-commerce and digital services in South Africa, and it’s here to make one point stick: AI performance is inseparable from infrastructure efficiency. If your models are slow, expensive, or unreliable at scale, it’s rarely “just a data science problem.”
If you’re planning 2026 growth targets now, you’ll be making decisions in the next few weeks that affect your cost base for years: where your workloads live, how they scale, and how well they can handle peak demand without spiralling spend.
So here’s the forward-looking question worth sitting with: as your AI features grow from “experiments” into core revenue drivers, is your infrastructure plan built for efficiency—or for excuses?