Global AI data centres are scaling fast. Here’s what Singapore businesses should do now to control cost, improve governance, and adopt AI tools faster.

AI Data Centres Are Scaling Fast: What Singapore Firms Should Do Now
A 240-megawatt (MW) AI data centre is not a normal “IT expansion”. It’s an industrial-scale bet that demand for GPUs—and the businesses built on them—will keep climbing.
That’s what Nebius, an Amsterdam-based AI cloud services firm, is doing with its newly announced plan to redevelop a former Bridgestone tyre plant in Béthune, near Lille in France, into one of Europe’s largest AI-focused data centres. According to the Reuters report carried by CNA, the first phase is expected online by late summer 2026, with roughly half the site operational by end-2026. Development cost estimates from CBRE for AI data centres sit around US$10–14 million per MW, implying a ballpark of roughly US$2.4–3.4 billion for 240MW.
If you run a business in Singapore, this matters for a practical reason: AI capability is increasingly constrained (or enabled) by compute supply. When global players spend billions to secure capacity, it affects pricing, availability, latency, data residency options, and the speed at which new AI services reach your team.
This article is part of the AI Business Tools Singapore series—focused on how Singapore companies adopt AI for marketing, operations, and customer engagement. The goal here isn’t to admire a big European build. It’s to translate what it signals into decisions you can make this quarter.
What Nebius’ 240MW build really signals (beyond France)
Answer first: A 240MW build signals that AI is moving from “pilot projects” to production workloads at national and continental scale, and that compute is becoming a strategic supply chain.
Nebius is part of a wave of “neocloud” providers (the article mentions CoreWeave in the US) that became prominent by supplying AI infrastructure, often via large deals with hyperscalers. In Nebius’ case, Reuters notes high-profile agreements including a US$17 billion deal with Microsoft and a US$3 billion deal with Meta. These partnerships do two things at once:
- They validate demand: hyperscalers don’t commit billions unless they see sustained usage.
- They finance capacity: large contracts help fund the expensive and continuous build-out of GPU-heavy infrastructure.
Nebius also said it currently serves many startups and AI-first firms, as well as digital-heavy customers such as Mistral, Shopify, and ServiceNow. That customer list is a clue: AI infrastructure demand isn’t limited to “AI companies”. It’s coming from firms that need AI embedded into products, service operations, and internal workflows.
Why 240MW is a big deal
Most business leaders hear “megawatt” and tune out. Don’t. In data centres, MW is a proxy for how much compute you can run (and how much heat you have to remove).
A large AI data centre isn’t like the cloud of 2015. It’s built around:
- GPU clusters (and the networking to connect them)
- power delivery and redundancy
- cooling architecture tuned for high-density racks
The bigger point: when a provider plans for 240MW, they’re planning for sustained demand from customers running training, fine-tuning, and high-volume inference—workloads that quickly turn “AI budget” into “AI power bill”.
The real takeaway for Singapore businesses: compute is becoming a pricing lever
Answer first: As AI infrastructure demand rises globally, Singapore companies should assume AI service pricing and availability will fluctuate, and plan procurement and architecture accordingly.
When supply is tight, three things happen that hit day-to-day business adoption:
- Unit costs become unpredictable (especially for peak GPU instances)
- Quotas and capacity reservations become common (you may not get what you want when you want it)
- Vendor lock-in risk increases (teams build around whatever capacity they can secure)
I’ve found that most companies in Singapore don’t fail at AI because the model is “not smart enough”. They fail because:
- they can’t keep costs stable,
- they can’t move sensitive data safely,
- or they can’t get production reliability.
Large-scale builds like Nebius’ are a sign that the market is moving toward more capacity, but also toward more sophisticated buyers—buyers who negotiate commitments, governance, and performance guarantees.
What this means for marketing, ops, and customer engagement
You don’t need your own data centre. You do need to treat AI like a core production dependency.
- Marketing teams: content generation is the easy part. The high-value work is audience segmentation, creative testing, and personalisation at scale, often driven by embeddings, retrieval, and inference pipelines.
- Operations teams: AI copilots for SOP search, ticket triage, forecasting, and anomaly detection live or die on data quality and stable inference performance.
- Customer engagement: chat and voice automation require low latency, consistent answers, and tight guardrails—especially in regulated contexts.
If global infrastructure keeps scaling, Singapore companies will have more options. But the winners will be the ones who set up their tooling so they can switch providers, control cost, and maintain compliance.
How to “translate” global AI infrastructure into local advantage (Singapore playbook)
Answer first: Use the global infrastructure boom as a prompt to upgrade your AI operating model—procurement, architecture, governance, and measurement—so you can adopt AI tools faster without cost surprises.
Here’s a practical playbook that fits most SME-to-midmarket teams, plus many enterprise units.
1) Buy outcomes, not GPUs
Most Singapore businesses don’t need to think in “A100 vs H100 vs next-gen.” They need to buy outcomes such as:
- Cost per resolved ticket (customer support)
- Cost per qualified lead (marketing)
- Time to produce a compliant first draft (legal/comms)
- Reduction in manual reconciliation hours (finance/ops)
Then map those outcomes to a compute strategy:
- API-first for fast deployment
- reserved capacity or committed spend when usage is predictable
- multi-model routing (choose cheaper models for easy tasks)
A memorable one-liner that holds up in procurement meetings: “If you can’t measure it per transaction, you can’t control it at scale.”
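To make that concrete, here’s a minimal sketch in Python of multi-model routing with per-transaction cost tracking. The model names, per-token rates, and routing rules are illustrative assumptions, not any vendor’s actual pricing.
```python
# Minimal sketch: route each task to the cheapest capable model tier and
# estimate cost per transaction. Model names and per-token rates are
# illustrative placeholders, not real vendor pricing.
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    usd_per_1k_tokens: float  # blended input + output rate (assumed)

# Hypothetical tiers: a cheap small model, a mid-tier model, and a premium model.
TIERS = {
    "small":   ModelTier("small-model-v1",   0.0005),
    "mid":     ModelTier("mid-model-v1",     0.003),
    "premium": ModelTier("premium-model-v1", 0.015),
}

def classify_task(task_type: str) -> str:
    """Rough routing rule: easy, repetitive tasks go to cheaper tiers."""
    if task_type in ("ticket_triage", "tagging", "summary"):
        return "small"
    if task_type in ("draft_reply", "report_draft"):
        return "mid"
    return "premium"  # e.g. complex analysis or compliance-sensitive drafting

def cost_per_transaction(task_type: str, est_tokens: int) -> tuple[str, float]:
    """Return (model name, estimated US$ cost) for one transaction."""
    tier = TIERS[classify_task(task_type)]
    return tier.name, est_tokens / 1000 * tier.usd_per_1k_tokens

if __name__ == "__main__":
    for task, tokens in [("ticket_triage", 800), ("draft_reply", 2500)]:
        model, cost = cost_per_transaction(task, tokens)
        print(f"{task}: routed to {model}, est. US${cost:.4f} per transaction")
```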
2) Design for portability from day one
Compute supply cycles change. Your architecture should survive them.
A portability-minded setup usually includes:
- A thin model abstraction layer (so you can swap LLM providers)
- Centralized prompt/version control
- Standard evaluation harnesses (quality tests that run weekly)
- A RAG layer that’s vendor-neutral (your knowledge base and embeddings strategy matter)
When Nebius says it will buy Nvidia chips “shortly before they go into use,” it’s a reminder that hardware availability is dynamic. Your software should be, too.
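As a rough illustration, the “thin abstraction layer” can be a single interface that each provider adapter implements. The provider classes below are stand-ins for whichever SDKs you actually use; the point is that swapping providers becomes a configuration change, not a rewrite.
```python
# Minimal sketch of a provider-agnostic "thin abstraction layer".
# ProviderAClient / ProviderBClient are placeholders; in practice each
# adapter wraps a real vendor SDK behind the same complete() signature.
from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str: ...

class ProviderAClient(LLMClient):
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str:
        # Call provider A's SDK here; stubbed for the sketch.
        return f"[provider-a response to: {prompt[:40]}...]"

class ProviderBClient(LLMClient):
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str:
        # Call provider B's SDK here; stubbed for the sketch.
        return f"[provider-b response to: {prompt[:40]}...]"

def get_client(provider: str) -> LLMClient:
    """Single switch point: change config, not application code."""
    registry = {"provider_a": ProviderAClient, "provider_b": ProviderBClient}
    return registry[provider]()

if __name__ == "__main__":
    client = get_client("provider_a")   # driven by config in real setups
    print(client.complete("Summarise this customer ticket: ..."))
```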
3) Treat latency and data residency as product requirements
Europe’s build-out is partly about having a regional footprint as more firms adopt AI. For Singapore teams, the parallel is simple: where compute sits changes performance and compliance posture.
If you’re in finance, healthcare, or handling sensitive HR/customer data, decide early:
- What data can go to external APIs?
- What must stay in your controlled environment?
- What needs masking, tokenisation, or redaction?
This isn’t paperwork. It’s what stops your AI pilot from dying in InfoSec review.
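To show what “mask it before it leaves your environment” can look like, here’s a hedged sketch of simple redaction. The regex patterns below (email, local mobile numbers, NRIC-like strings) are simplified examples, not a complete PDPA control; treat them as a starting point for your own checklist.
```python
# Minimal sketch: redact obvious identifiers before sending text to an
# external API. Patterns are deliberately simplified examples; a real
# control needs a reviewed pattern set plus data classification,
# not just regex.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b[89]\d{7}\b"), "[PHONE]"),              # 8-digit SG mobile numbers
    (re.compile(r"\b[STFG]\d{7}[A-Z]\b"), "[NRIC]"),        # NRIC-like strings
]

def redact(text: str) -> str:
    for pattern, label in REDACTION_RULES:
        text = pattern.sub(label, text)
    return text

if __name__ == "__main__":
    raw = "Customer Tan (S1234567D, tan@example.com, 91234567) asked about a refund."
    print(redact(raw))
    # -> "Customer Tan ([NRIC], [EMAIL], [PHONE]) asked about a refund."
```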
4) Budget like a CFO, experiment like a product team
Nebius reported a wider quarterly loss while revenue jumped because it’s spending heavily on capacity. That’s the AI business model in 2026: upfront investment to serve future demand.
For Singapore businesses, the equivalent discipline is:
- Set a monthly inference budget by team
- Define “free experimentation” limits
- Track unit economics (per ticket, per lead, per report)
- Build a rollback plan if costs spike
A simple rule I recommend: If a workflow can’t show a clear ROI within 60–90 days, it stays in sandbox.
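Here’s a minimal sketch of that discipline in code: per-team monthly budgets plus a unit-cost ceiling, with alerts when either is breached. All figures are made up for illustration.
```python
# Minimal sketch: per-team monthly inference budget with a simple alert
# when spend or unit cost exceeds the agreed ceiling. All numbers are
# illustrative, not benchmarks.
from collections import defaultdict

MONTHLY_BUDGET_USD = {"support": 2000.0, "marketing": 1500.0, "finance_ops": 800.0}
UNIT_COST_CEILING_USD = {"support": 0.05, "marketing": 0.40, "finance_ops": 0.10}

spend = defaultdict(float)        # team -> US$ spent this month
transactions = defaultdict(int)   # team -> transactions this month

def record(team: str, cost_usd: float) -> list[str]:
    """Record one transaction and return any budget alerts."""
    spend[team] += cost_usd
    transactions[team] += 1
    alerts = []
    if spend[team] > MONTHLY_BUDGET_USD[team]:
        alerts.append(f"{team}: monthly budget exceeded (US${spend[team]:.2f})")
    unit = spend[team] / transactions[team]
    if unit > UNIT_COST_CEILING_USD[team]:
        alerts.append(f"{team}: unit cost US${unit:.3f} above ceiling")
    return alerts

if __name__ == "__main__":
    for cost in (0.03, 0.04, 0.09):   # three support transactions
        for alert in record("support", cost):
            print("ALERT:", alert)
```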
“People also ask” questions Singapore teams raise (and straight answers)
Is more global data centre capacity going to make AI cheaper for my business?
Eventually, yes—but not evenly. More supply tends to reduce price pressure over time, but premium capacity (top GPUs, best regions, low-latency setups) can remain expensive. Your biggest savings usually come from routing (matching tasks to model cost) and reducing waste (unnecessary tokens, poor prompts, repeated calls).
Do SMEs in Singapore need private AI infrastructure?
Rarely. Most SMEs should start with managed AI services and focus on process design, data readiness, and governance. Private or dedicated setups become relevant when you have strict compliance requirements or very high, predictable volumes.
What’s the first “serious” AI tool investment that pays off?
For many Singapore businesses, it’s one of these:
- Customer support triage + draft replies (with human approval)
- Sales enablement: knowledge-base search + call summarization
- Finance ops: invoice classification + exception handling
These are repeatable, measurable, and tied to clear unit economics.
What to do next: a 30-day action plan for AI adoption in Singapore
Answer first: In the next 30 days, you can reduce AI adoption risk by locking down use cases, costs, and governance—before you scale usage.
Here’s a plan that works even if you don’t have an in-house ML team:
- Pick two workflows (one customer-facing, one internal) with clear volume and ROI.
- Define success metrics: time saved, accuracy, CSAT impact, conversion lift.
- Create a data handling checklist: what data is allowed, what must be removed.
- Run a vendor bake-off using your real data (anonymised), scored weekly (see the scoring sketch after this list).
- Ship a controlled rollout (10–20 users) and monitor cost per transaction.
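For the bake-off, a tiny scoring harness keeps comparisons honest: the same test cases, scored the same way, every week, for every vendor. The rubric below is a placeholder; swap in your own scoring (exact match, human review, or an evaluation model), and the vendors and cases are examples.
```python
# Minimal sketch: weekly vendor bake-off scoring on a fixed test set.
# score_answer() is a placeholder for your real rubric; vendors and
# test cases are illustrative examples.
from statistics import mean

TEST_CASES = [
    {"prompt": "Summarise ticket #123 in one sentence.", "expected_keyword": "refund"},
    {"prompt": "Classify this invoice line item.",        "expected_keyword": "utilities"},
]

def score_answer(answer: str, expected_keyword: str) -> float:
    """Placeholder rubric: 1.0 if the expected keyword appears, else 0.0."""
    return 1.0 if expected_keyword.lower() in answer.lower() else 0.0

def run_bakeoff(vendor_call) -> float:
    """vendor_call: function(prompt) -> answer, wrapping one vendor's API."""
    scores = [score_answer(vendor_call(c["prompt"]), c["expected_keyword"])
              for c in TEST_CASES]
    return mean(scores)

if __name__ == "__main__":
    # Stub standing in for a real vendor integration.
    def vendor_a(prompt: str) -> str:
        return "The customer requested a refund for order 123."
    print(f"vendor_a weekly score: {run_bakeoff(vendor_a):.2f}")
```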
If a European provider can justify billions of dollars to meet AI demand, the practical message for Singapore leaders is: AI isn’t a side project anymore—your competitors are operationalising it.
The forward-looking question to end on: When compute becomes cheaper and more available, will your business be ready to scale AI safely—or will you still be arguing about who owns the prompts?