AI Data Centres: What Legrand’s Bet Means for SG

AI Business Tools Singapore · By 3L3C

Legrand’s data centre push shows AI growth is an infrastructure story. Here’s what Singapore businesses should change in their AI tool and hosting decisions.

Data Centres · AI Infrastructure · Singapore Business · AI Cost Management · AI Marketing Ops · Enterprise IT



Legrand just raised its medium-term profitability target because one market is expanding fast enough to reshape the whole business: data centres. In its latest results, the French electrical and digital building infrastructure group said data centres already account for 26% of its 2025 revenue, and it sees a path to as much as 40% over time. It also nudged its five-year margin ambition to “above 20%” through 2030, up from “around 20%.” That’s not a small tweak. It’s a signal.

For Singapore businesses following our “AI Business Tools Singapore” series, this matters for a practical reason: AI adoption isn’t just about picking the right tool. It’s about whether your infrastructure can carry the load—compute, storage, network, power, cooling, resilience, compliance, cost control.

Most teams treat infrastructure as someone else’s problem—until a model rollout slows to a crawl, cloud bills spike, latency becomes customer-visible, or security reviews stall a project for weeks. Legrand’s move is a reminder that the “boring” layer (power distribution, racks, connectivity, uptime) is where AI becomes real.

Why data centre demand is rising (and why AI is the driver)

Answer first: Data centre growth is being pulled by AI because modern AI workloads consume more compute and power, push more data through networks, and demand lower latency and higher reliability than traditional enterprise apps.

Legrand CEO Benoit Coquart summed it up bluntly: “We have just scratched the surface of AI.” That line tracks with what many Singapore companies are seeing on the ground. AI projects don’t stay small for long. A pilot that runs fine on a managed cloud notebook becomes a production pipeline with:

  • real-time or near-real-time inference
  • vector databases and retrieval systems
  • frequent model updates and A/B tests
  • monitoring for drift, bias, and cost
  • heavier privacy and governance requirements

Each step increases the infrastructure footprint.

Legrand’s results add numbers to the story:

  • 2025 revenue: €9.48B (up 9.6%)
  • Adjusted operating profit: €1.96B (up 10.5%)
  • 2026 expected sales growth: 10%–15%
  • 2026 adjusted operating margin target: 20.5%–21% (post-acquisitions)

When a company that historically sells electrical solutions to buildings starts talking like a digital infrastructure firm, it tells you where budgets are moving.

What Legrand’s acquisitions reveal about where infrastructure spending is going

Answer first: The buying spree shows data centre infrastructure is consolidating around power distribution, energy efficiency, and “digital in the physical world”—the stuff that determines uptime and operating cost.

Legrand said it acquired:

  • Green4T (Brazil): data centre infrastructure specialist
  • Kratos Industries (U.S.): power distribution systems maker

It also completed seven acquisitions in 2025 across new energies and digital, adding about €500M in revenue while sustaining margins.

Here’s the part I find most telling: Coquart said the company reviews almost 400 acquisition targets on an ongoing basis, about half tied to the broader energy and digital transition. That’s not opportunistic dabbling. That’s a map of where the market thinks profit pools will sit.

The hidden constraint: power, not software

In AI conversations, everyone talks about models, prompts, and features. But in data centres, the bottleneck is often power availability and efficiency.

AI training and high-throughput inference put stress on:

  • electrical capacity and distribution
  • cooling design (and the shift toward liquid cooling in some deployments)
  • redundancy planning (N+1, 2N)
  • monitoring and automation (DCIM, telemetry, predictive maintenance)

Infrastructure companies win when they help operators reduce downtime risk and squeeze more compute per kilowatt.

For Singapore businesses, the implication is straightforward: AI cost is increasingly an infrastructure math problem—even if you don’t own a data centre.

What this means for Singapore companies adopting AI tools

Answer first: Expect AI tools to perform (and cost) very differently depending on latency, data locality, and the quality of the underlying compute stack—so infrastructure choices become business choices.

Singapore is pushing hard on becoming a trusted digital and AI hub, and regional demand (SEA + India + Australia routes, cross-border fintech, e-commerce, logistics) keeps data gravity here. But businesses still face real trade-offs:

  • Do you run AI workloads fully in the public cloud, go hybrid, or use colocation?
  • Do you need low latency for customer-facing AI (chat, recommendations), or is batch OK?
  • Are you constrained by regulatory expectations around data residency and auditability?

A practical way to think about it: “AI readiness” isn’t one score

I prefer breaking AI readiness into four checks. If one fails, the project drags.

  1. Data readiness: clean data, permissions, lineage, retention rules
  2. App readiness: APIs, event streams, observability, rollback paths
  3. Risk readiness: security, privacy, model governance, vendor risk
  4. Infrastructure readiness: capacity, cost predictability, latency, reliability

Legrand’s news lands in #4—and #4 tends to be ignored until it’s painful.

Example: AI marketing that looks great… until peak traffic

Many Singapore SMEs start with AI for marketing and customer engagement:

  • AI chat on landing pages
  • lead scoring
  • personalised product recommendations
  • automated ad creative and copy variants

If those experiences are customer-facing, latency becomes brand perception. If response times swing from 1–2 seconds to 8–10 seconds during campaigns, conversion drops. If inference costs spike during peak hours, CAC looks worse overnight.

The fix often isn’t “better prompts.” It’s infrastructure: autoscaling strategy, caching, right-sizing, using smaller models for common intents, routing complex cases to stronger models, and ensuring the data path is efficient.

How to align AI projects with the infrastructure reality (a playbook)

Answer first: You don’t need to build a data centre to benefit from data centre growth—but you do need to design AI systems like an operator: measured, costed, and resilient.

Here’s a playbook I’ve seen work for teams rolling out AI business tools in Singapore.

1) Classify workloads by latency and sensitivity

Put each AI use case into a simple matrix:

  • Real-time + sensitive data (e.g., customer support with account info)
  • Real-time + low sensitivity (e.g., product Q&A, public info)
  • Batch + sensitive (e.g., finance reconciliation, HR analytics)
  • Batch + low sensitivity (e.g., content generation)

This drives decisions about model choice, hosting, and controls.
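The matrix above can be sketched as a simple lookup. The quadrant labels come from the list; the hosting postures are illustrative assumptions, not official guidance, and a real team would tune them to its own regulatory context.

```python
# Illustrative sketch: map an AI use case onto the latency/sensitivity
# matrix and suggest a hosting posture. Postures are placeholder
# assumptions for discussion.

def classify_workload(real_time: bool, sensitive: bool) -> dict:
    """Return the matrix quadrant and an example hosting posture."""
    if real_time and sensitive:
        return {"quadrant": "real-time + sensitive",
                "posture": "in-region hosting, strict access controls, audit logs"}
    if real_time:
        return {"quadrant": "real-time + low sensitivity",
                "posture": "managed API with autoscaling and caching"}
    if sensitive:
        return {"quadrant": "batch + sensitive",
                "posture": "scheduled jobs in a controlled environment"}
    return {"quadrant": "batch + low sensitivity",
            "posture": "cheapest available batch compute"}

# Customer support with account info -> real-time + sensitive
print(classify_workload(True, True)["quadrant"])
# Content generation -> batch + low sensitivity
print(classify_workload(False, False)["quadrant"])
```

Even a crude mapping like this forces the conversation early: a "real-time + sensitive" use case rules out some vendors before anyone falls in love with a demo.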

2) Use “model routing” to cut cost without hurting quality

A common mistake: sending every request to the largest model.

A better approach:

  • Small/fast model handles common intents
  • Large model handles edge cases
  • Rule-based safeguards for regulated flows

This is infrastructure-aware AI. It keeps performance stable and bills sane.
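A minimal routing sketch, assuming a pre-classified intent label. The intent set, model names, and the regulated-flow flag are placeholders; a production router would classify intents with an actual model or embeddings.

```python
# Model-routing sketch: rules for regulated flows, a small model for
# common intents, a large model only for edge cases. All names are
# illustrative assumptions.

COMMON_INTENTS = {"opening_hours", "order_status", "pricing"}

def route(intent: str, regulated: bool = False) -> str:
    """Pick a handler for a request based on intent and regulation."""
    if regulated:
        return "rule_based"    # deterministic, auditable path
    if intent in COMMON_INTENTS:
        return "small_model"   # fast and cheap for frequent asks
    return "large_model"       # reserve expensive compute for edge cases

print(route("order_status"))                      # small_model
print(route("complex_complaint"))                 # large_model
print(route("loan_application", regulated=True))  # rule_based
```

If most traffic is common intents, this pattern alone can cut inference spend substantially without touching answer quality on the easy cases.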

3) Treat observability as a requirement, not a bonus

If you can’t measure it, you can’t manage it. At minimum, track:

  • cost per conversation / cost per lead
  • median and P95 latency
  • tool-call failure rate
  • hallucination rate (via sampling + human review)
  • conversion impact vs control

This is where “AI business tools” stop being a demo and start being operations.
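The latency and cost metrics above need no heavy tooling to start with. Here is a back-of-envelope sketch using a nearest-rank percentile over raw samples; all figures are invented for illustration.

```python
# Minimum-viable telemetry sketch: median and P95 latency plus cost
# per conversation from raw samples. Numbers are made up.

def percentile(samples, p):
    """Nearest-rank percentile (0 < p <= 100) over a list of samples."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

latencies_ms = [320, 410, 290, 2400, 380, 350, 300, 8100, 330, 360]
costs_sgd = [0.004, 0.006, 0.003, 0.031, 0.005]

print("median latency:", percentile(latencies_ms, 50), "ms")
print("P95 latency:", percentile(latencies_ms, 95), "ms")
print("cost/conversation:", round(sum(costs_sgd) / len(costs_sgd), 4), "SGD")
```

Note how the P95 figure exposes the 8-second outliers that a median hides, which is exactly the campaign-peak behaviour customers actually feel.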

4) Negotiate vendors like you’re buying infrastructure

Many AI tools are resold compute with a UI on top. Ask direct questions:

  • Where is inference run (region)?
  • What are the scaling limits?
  • What’s the data retention policy?
  • Is there an audit log?
  • What happens during outages?

Singapore buyers are often more rigorous on software features than on operational guarantees. Flip that.

5) Plan for a world where data centres keep growing

Legrand expects the global data centre market to keep growing at double-digit rates until the end of the decade (though at a slower pace in some periods). If that’s broadly right, you should assume:

  • more AI features will ship “on by default” in business software
  • AI compute will be a recurring line item, not a one-off project
  • governance and cost controls will become competitive advantages

“But we’re not a data centre business.” You still depend on one.

Answer first: Even if you’re an SME using SaaS tools, your AI outcomes depend on data centre economics—power costs, capacity, and resilience shape what your vendors can deliver and what you pay.

Legrand’s comment that the U.S. data centre market is “growing like hell,” with Europe lagging but catching up, is also a reminder: capacity growth is uneven across regions. That affects pricing, availability, and latency.

For Singapore firms selling regionally, the smart move is to design AI services that tolerate variability:

  • degrade gracefully (fallback responses)
  • cache frequently used outputs
  • precompute recommendations where possible
  • keep a human handoff path for critical flows

Reliability is a product feature—especially when AI is involved.
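The degradation patterns above can be combined in a few lines. This is a sketch, not a production design: `call_model` is a stand-in for a real inference call, and the cache is a plain in-memory dict with a TTL.

```python
# Graceful-degradation sketch: serve cached answers for frequent
# questions; fall back to a canned response (with human handoff) when
# the model call fails. call_model is a placeholder, not a real API.

import time

CACHE: dict = {}       # question -> (answer, timestamp)
TTL_SECONDS = 300

FALLBACK = "Sorry, we're busy right now. A team member will follow up."

def call_model(question: str) -> str:
    # Placeholder for a real inference call that can time out or fail.
    raise TimeoutError("model unavailable")

def answer(question: str) -> str:
    cached = CACHE.get(question)
    if cached and time.time() - cached[1] < TTL_SECONDS:
        return cached[0]                  # cheap, fast, predictable
    try:
        result = call_model(question)
        CACHE[question] = (result, time.time())
        return result
    except Exception:
        return FALLBACK                   # degrade gracefully, never hang

CACHE["What are your opening hours?"] = ("9am-6pm, Mon-Sat.", time.time())
print(answer("What are your opening hours?"))  # served from cache
print(answer("Do you ship to Johor?"))         # model down -> fallback
```

The point is the shape, not the code: every customer-facing AI path should have an answer ready even when the expensive path is down.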

What to do next if you’re building with AI in Singapore

Legrand’s data centre bet is a corporate version of a message I repeat to clients: AI scale is mostly an engineering and operations problem, not an ideation problem. Ideas are cheap. Running them reliably is the hard part.

If you’re adopting AI for marketing, operations, or customer engagement, take one step this week: write down your top two AI use cases and estimate their infrastructure profile—real-time vs batch, expected volume, acceptable latency, data sensitivity, and cost ceiling.

That single exercise makes vendor selection cleaner, architecture decisions faster, and budget approvals less political. And it sets you up to benefit from the region’s growing AI infrastructure rather than being surprised by it.
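As a worked illustration of that profile exercise, here is one use case written down in code form. Every number (volume, per-request price, ceiling) is a placeholder assumption; the value is in making the fields explicit, not in the specific figures.

```python
# Back-of-envelope infrastructure profile for one AI use case.
# All figures are illustrative assumptions.

def monthly_cost(requests_per_day: float, cost_per_request_sgd: float,
                 days: int = 30) -> float:
    """Rough monthly inference cost from daily volume and unit price."""
    return requests_per_day * cost_per_request_sgd * days

profile = {
    "use_case": "AI chat on landing page",
    "mode": "real-time",
    "requests_per_day": 2_000,
    "acceptable_p95_ms": 2_000,
    "data_sensitivity": "low",
    "cost_ceiling_sgd": 1_500,
}

estimate = monthly_cost(profile["requests_per_day"], 0.02)
print(f"estimated: SGD {estimate:.0f} / ceiling: SGD {profile['cost_ceiling_sgd']}")
# 2,000 req/day x SGD 0.02 x 30 days = SGD 1,200: within ceiling
```

Ten minutes with a sheet like this per use case is usually enough to spot the project whose cost ceiling is fiction.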

Where do you think your biggest constraint will show up first in 2026: data quality, governance, or infrastructure capacity/cost?

Source article: https://www.channelnewsasia.com/business/legrand-bets-data-centre-growth-lifts-profit-targets-5925646
