Energy-efficient datacentres make AI e-commerce affordable in SA. Learn how cooling design, PUE and renewable energy choices impact AI scale and margins.

AI E-commerce in SA Needs Cooler, Greener Datacentres
South African e-commerce teams talk a lot about models, personalisation, and automation. But if you’re running AI-driven recommendations, search, fraud detection, chatbots, or even just heavy analytics, your real constraint is usually far less glamorous: how efficiently your datacentre can turn electricity into compute without turning it into a heat problem you can’t afford.
Here’s the blunt reality I’ve seen across digital businesses: when power is expensive and supply is fragile, efficiency isn’t “nice to have” ESG theatre — it’s the difference between scaling AI and shelving it. And the least understood part of efficiency is cooling.
A recent look inside Africa Data Centres’ JHB1 facility in Midrand highlights why this matters for the broader “How AI Is Powering E-commerce and Digital Services in South Africa” story. AI adoption is accelerating, but the infrastructure underneath it has to get smarter, cooler, and more predictable.
Datacentre efficiency is an AI growth strategy, not a facilities issue
If your AI roadmap assumes unlimited compute, your datacentre bill will correct you quickly. AI workloads push higher rack densities, longer peak usage windows, and more heat per square metre. That’s true whether you’re training models or simply serving them at scale.
For South African online retailers and digital service providers, that translates into very practical questions:
- Can we afford to serve personalised product feeds during December peak?
- Can we keep response times low for customer support chatbots when traffic spikes?
- Can we run fraud scoring in real time without throttling?
All of these are compute-and-heat problems masquerading as software features.
The part most teams miss: 100% of IT power becomes heat
A line that should be printed on every AI strategy deck: essentially every watt you feed into electronics comes back out as heat. At the datacentre level, the “work” a CPU or GPU does is inseparable from the heat it generates, and all of that heat has to be removed.
So your AI cost base is never just:
- IT power (servers, GPUs, storage)
It’s also:
- The extra energy required to remove the heat
That’s why Power Usage Effectiveness (PUE) is such a big deal.
PUE explained in plain language (and why 1.3 matters)
PUE is the ratio of total facility power to IT equipment power. A PUE of 1.3 means that for every 1.0 unit of IT power, you spend an additional 0.3 units on facility overheads (primarily cooling, plus power conversion, lighting, etc.).
Africa Data Centres reports achieving around 1.3 PUE on new builds. That’s meaningful in a market where energy pricing and continuity are constant board-level topics.
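To make that arithmetic concrete, here is a minimal sketch. The 500 kW IT load, the R2.50/kWh tariff and the 1.8 comparison PUE are illustrative assumptions, not Africa Data Centres figures; only the 1.3 comes from the paragraph above.

```python
# Rough illustration of what a PUE improvement is worth.
# Every input here is an assumption for illustration, not a vendor figure.
it_load_kw = 500             # assumed IT load (servers, GPUs, storage)
tariff_rand_per_kwh = 2.50   # assumed blended electricity tariff
hours_per_year = 24 * 365

def annual_energy_cost(pue: float) -> float:
    """Total facility energy cost for a given PUE at the assumed IT load."""
    facility_kw = it_load_kw * pue  # PUE = total facility power / IT power
    return facility_kw * hours_per_year * tariff_rand_per_kwh

legacy_cost = annual_energy_cost(1.8)  # assumed figure for an older, fridge-style facility
modern_cost = annual_energy_cost(1.3)  # the new-build figure quoted above
print(f"Annual saving at 500 kW IT load: R{legacy_cost - modern_cost:,.0f}")
```

Even if the exact numbers are off, the shape of the result holds: the PUE multiplier applies to every kilowatt-hour your AI workloads consume, all year.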
Here’s the practical business implication:
Improving PUE is one of the fastest ways to lower the per-transaction cost of AI in e-commerce.
If your recommendation engine, search ranking, and customer segmentation are running 24/7, shaving overhead power is like negotiating a permanent discount on your cloud or colocation bill.
Why older datacentres struggle with AI-heavy workloads
Older facilities often behave “like fridges”: low temperature set-points, poor airflow management, and legacy equipment that doesn’t cool itself efficiently. That approach was already expensive; AI makes it worse because:
- GPU servers run hot (and stay hot)
- Rack density climbs
- Cooling inefficiencies compound quickly
If you’re planning an AI rollout for 2026 budgets, you should treat datacentre age and cooling design as first-class procurement criteria, not footnotes.
Cooling design is where efficiency is won (or lost)
Most of the efficiency gains in modern datacentres come from cooling system design. Operators can optimise operations at the edges, but design decisions set the ceiling.
Africa Data Centres’ approach illustrates three tactics that directly support AI scale in South Africa.
1) Closed-loop, air-cooled chillers: less water drama, more predictability
A global criticism of datacentres is water consumption, especially where evaporative cooling is used. ADC’s facilities use external, air-cooled chillers with a closed-loop water system, which means almost no water is consumed in day-to-day operation.
That matters locally for two reasons:
- ESG scrutiny is rising (from enterprise customers and regulators)
- Water constraints aren’t theoretical in many regions
For digital services brands, this becomes a vendor selection issue: if you’re selling to corporates with sustainability reporting, your infrastructure footprint ends up in their questions.
2) Shade the chillers: the simplest “efficiency hack” is common sense
Putting cooling equipment in direct sun is an avoidable self-own. ADC shades its chillers (at JHB1, with a soft-shell roof) because solar heat gain reduces chiller efficiency.
It sounds obvious, yet many sites still leave chillers exposed. The cost shows up later as higher ongoing energy spend.
For AI-driven e-commerce operations, you feel this indirectly as:
- higher cost per model inference
- more expensive peak season scaling
- less headroom to add new AI features
3) Free cooling: Johannesburg’s climate is an advantage
Free cooling means switching off mechanical refrigeration and letting cool outside air do the work when ambient temperatures are low enough. ADC uses free cooling when the outside temperature is below 17°C.
Johannesburg can support free cooling for roughly 180 days a year, delivering about 5% to 10% energy savings versus running chillers continuously.
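A back-of-envelope sanity check on that range, using assumed splits rather than ADC data (the cooling share and the fraction of chiller energy avoided are guesses for illustration):

```python
# Back-of-envelope check: do ~180 free-cooling days plausibly yield 5-10% savings?
# The cooling share and avoided-energy fraction are assumptions, not ADC data.
it_load_kw = 500          # assumed IT load
cooling_share = 0.25      # assumed cooling portion of the 0.3 PUE overhead
free_cooling_days = 180   # days per year Johannesburg supports free cooling
avoided_fraction = 0.5    # assumed share of cooling energy saved on those days

cooling_kwh = it_load_kw * cooling_share * 24 * 365
saved_kwh = cooling_kwh * (free_cooling_days / 365) * avoided_fraction
total_kwh = it_load_kw * 1.3 * 24 * 365
print(f"Estimated saving: {saved_kwh / total_kwh:.1%} of total facility energy")
```

That lands right around the bottom of the quoted 5% to 10% band, which is the point: the saving per day is modest, and it only adds up because Johannesburg offers so many qualifying days.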
This is the kind of “quiet advantage” that makes local infrastructure strategy matter. If you’re an SA-based retailer choosing where to host latency-sensitive AI services, climate-linked efficiency is part of the ROI.
Temperature set-points: stop treating datacentres like fridges
Running colder than necessary is one of the most expensive habits in infrastructure. ADC targets a set-point around 23–24°C, near the midpoint of the ASHRAE recommended envelope (roughly 18–27°C).
The logic is simple:
- Lower set-point temperature = more energy required
- Higher set-point temperature = risk of insufficient cooling
So the goal is the highest safe temperature, not the lowest possible one.
Why this matters specifically for GPUs
GPUs and CPUs typically operate at 60–70°C internally. Cooling isn’t about making the room cold; it’s about efficiently delivering cool air to where heat is generated, then removing hot air without mixing.
That leads to the next critical factor: containment.
Hot and cold aisle containment: the unglamorous hero of AI uptime
Containment forces air to do useful work. Cold air should go through racks, not leak into open space. Hot exhaust should be captured and directed back to the return path.
ADC runs large rooms (e.g., multi-megawatt halls) with containment so inlet air is around 23–24°C, while exhaust can be around 30°C.
This isn’t just a facilities best practice; it has direct outcomes for AI services:
- more consistent performance under load
- fewer thermal throttling incidents
- better predictability for capacity planning
The detail that breaks containment: gaps in racks
Containment fails when racks have gaps and missing blanking panels. Air takes the path of least resistance.
If you’re a business deploying your own cages or private suites in a colocation environment, make sure your ops checklist includes:
- blanking panels installed for unused rack units
- cable management that doesn’t block airflow
- agreed containment standards with the provider
Most companies get this wrong because it feels like “someone else’s problem” until the first heat-related incident.
Sustainability and resilience: why AI services will be judged by infrastructure choices
AI-powered digital services are increasingly sold on trust: trust in data handling, trust in uptime, trust in responsible operations. Datacentre decisions sit underneath all three.
Africa Data Centres frames its sustainability reporting in the familiar ESG scopes:
- Scope 1: on-site emissions (for example, running diesel generators)
- Scope 2: emissions from purchased electricity
- Scope 3: other value-chain emissions, such as those embedded in equipment manufacturing
For e-commerce and digital services leaders, here’s the practical translation:
- If your provider relies heavily on generators, your resilience story becomes an emissions story.
- If you’re selling to enterprise customers, you’ll be asked for reporting data.
- If you want long-term cost stability, renewable procurement starts to matter.
Renewables in South Africa: contracts are easier than electrons
The market is shifting fast. Renewable energy providers are actively looking for buyers, and datacentres are natural anchor customers.
But there’s a gap between signing contracts and actually wheeling power through the grid, because wheeling requires coordination across multiple parties (utilities, municipalities, grid operators). If you’re buying AI hosting, ask a pointed question:
Are you currently wheeling renewable electrons, or only contracted to do so?
It’s not a “gotcha”; it’s due diligence.
What e-commerce and digital service teams should do next
You don’t need to become a mechanical engineer to make better AI infrastructure decisions. You do need a sharper checklist when you evaluate hosting for AI workloads.
Here’s what works in practice.
A procurement checklist for AI-ready, energy-efficient hosting
- Ask for PUE by hall/build age, not a vague average.
- Confirm containment standards (and how they enforce blanking panels and airflow discipline).
- Understand set-point strategy (running at 23–24°C is a sign of confidence in design, not neglect).
- Ask how free cooling is used and what percentage of the year it’s available.
- Clarify water usage (closed-loop vs evaporative approaches).
- Get a realistic renewables status (contracted vs wheeling vs on-site generation).
Tie infrastructure metrics to customer metrics
Your customers don’t care about chillers. They do care about:
- fast search and product discovery
- accurate recommendations
- fewer false fraud declines
- responsive support channels
A good internal habit is to link infrastructure efficiency to business outcomes (see the costing sketch after this list), like:
- cost per 1,000 recommendations served
- cost per chatbot conversation resolved
- latency at peak traffic (Black Friday through January sales)
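Here is a minimal sketch of that linkage for the first metric. Every input is a placeholder to swap for your own measurements, and it covers only the energy slice of the unit cost:

```python
# Translate facility efficiency into a unit cost the business recognises.
# Every input is a placeholder assumption, not a measured figure.
pue = 1.3                    # facility efficiency reported by your provider
server_power_kw = 0.8        # assumed average draw of one inference server
recs_per_hour = 400_000      # assumed recommendations served per server-hour
tariff_rand_per_kwh = 2.50   # assumed blended electricity tariff

energy_cost_per_hour = server_power_kw * pue * tariff_rand_per_kwh
cost_per_1000_recs = energy_cost_per_hour / (recs_per_hour / 1000)
print(f"Energy cost per 1,000 recommendations: R{cost_per_1000_recs:.4f}")
```

Energy is only one slice of the full unit cost (hardware, software, people), but it is the slice that PUE, set-points and free cooling move directly.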
Once you do that, “cooling design” stops sounding like facilities trivia and starts reading like margin protection.
Where this fits in the AI-for-e-commerce story in South Africa
This topic series focuses on how AI is powering e-commerce and digital services in South Africa — content generation, marketing automation, customer engagement, and smarter operations. Those wins only hold if the infrastructure underneath them can scale responsibly.
Energy-efficient datacentres are part of that foundation. When a provider can hit around 1.3 PUE, reduce water waste through closed-loop cooling, and exploit free cooling for ~180 days in Johannesburg, you get a better platform for the AI features customers actually notice.
If you’re planning next year’s AI initiatives, treat your hosting and colocation decisions as product decisions. Because they are. What would your roadmap look like if every new AI feature had to “pay for” its heat?