Nxera’s new Tuas data centre signals rising AI demand in Singapore. Here’s what it means for AI tools, latency, cost, and scaling for SMEs.

AI-Ready Data Centres: What Nxera Tuas Means for SMBs
Singapore just got a serious infrastructure upgrade for AI.
On 9 Feb 2026, Singtel’s data centre arm Nxera opened DC Tuas—its largest facility to date, with 58MW of “AI-ready” capacity and a design positioned as its most energy-efficient yet. It’s also a signal: demand for AI, cloud, and high-performance computing (HPC) in Singapore isn’t slowing down, and the companies that plan their AI stack with infrastructure realities in mind will move faster (and waste less money).
This post is part of our AI Business Tools Singapore series, where we focus on the practical side of adopting AI for marketing, operations, and customer engagement. The point isn’t to admire data centres from afar. The point is to understand what changes when the underlying compute and network layer gets better—because that affects cost, latency, security, and what you can realistically deploy.
The real headline: AI adoption is constrained by infrastructure
If you’re rolling out AI tools inside a business, the “model” is rarely the bottleneck. The bottlenecks are usually compute availability, network performance, and cooling/power limits—especially for anything beyond basic text generation.
Nxera’s DC Tuas addresses those constraints in three concrete ways (all pulled directly from the announcement):
- 58MW of AI-ready capacity in one site, described as Singapore’s highest power-density data centre
- 120,000 sq ft across eight storeys
- More than 90% of capacity committed even before launch
That last point matters more than it might seem. When a facility is mostly spoken for before it opens, it’s a strong market signal: compute demand is outpacing supply, and planning “later” becomes expensive.
For Singapore SMEs and mid-market firms, the practical implication is simple:
Your AI roadmap needs an infrastructure plan, even if you’re not buying GPUs.
Because you’ll still be affected by where your vendors run workloads, where your data sits, and how fast your systems can talk to each other.
Why DC Tuas being “hyperconnected” changes day-to-day AI performance
Nxera positioned DC Tuas as Singapore’s only hyperconnected data centre integrated with a cable landing station, providing direct access to international and domestic networks, plus carrier neutrality.
Lower latency isn’t a vanity metric
For many AI use cases, latency becomes an operational issue:
- Customer support copilots that must respond quickly inside chat or call-centre tooling
- Personalisation engines that assemble offers while a customer is still browsing
- Fraud/risk scoring that must run before a transaction completes
- Real-time inventory and dynamic pricing where “a few seconds later” is too late
If your AI vendor’s inference endpoint is far away (or your network path is messy), you pay in:
- Slower responses and lower conversion
- Timeouts and degraded user experience
- Bigger “buffer” architectures (more caching, more retries, more complexity)
A hyperconnected facility with direct cable connectivity reduces network hops and improves reliability. It doesn’t guarantee your specific app will be faster—but it raises the ceiling of what’s achievable in Singapore.
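You can measure this ceiling yourself before committing to a vendor. Here is a minimal Python sketch that times repeated calls to an inference endpoint and reports percentile latency; `fake_inference_call` is a hypothetical stand-in, so swap in a real request to your vendor’s API:

```python
import time
import statistics

def measure_latency(call_fn, n_samples=20):
    """Time repeated calls and report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(n_samples):
        start = time.perf_counter()
        call_fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Hypothetical stand-in for a real vendor call (e.g. an HTTP POST to
# their Singapore inference endpoint).
def fake_inference_call():
    time.sleep(0.005)  # simulate ~5 ms of network + model time

report = measure_latency(fake_inference_call)
print(report)
```

The p95 number is the one to watch: users feel tail latency, not the average, and it is where a distant or congested network path shows up first.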
Reliability is the hidden requirement for AI tools
Teams tend to evaluate AI tools based on features: accuracy, prompt quality, integrations. Then they go live and discover the real KPI is uptime.
Better-connected data centres help providers deliver:
- More predictable performance
- Better redundancy options
- Cleaner multi-cloud connectivity
If you’re choosing AI business tools in Singapore (CRM copilots, marketing automation, call analytics, document intelligence), you should ask vendors one blunt question:
“Where is inference run, and what’s the network path to Singapore users and systems?”
Liquid cooling and power density: why it matters even if you don’t run GPUs
DC Tuas hosts Singapore’s largest direct-to-chip liquid cooling deployment, described as reducing energy and water consumption versus conventional cooling.
That sounds like a data-centre engineering detail, but it connects directly to business outcomes.
AI workloads are heat workloads
Modern AI infrastructure—especially GPUs for training and high-throughput inference—pushes extreme power densities. Cooling becomes a gating factor. When cooling is limited, the market responds in predictable ways:
- Higher costs for premium capacity
- Longer lead times
- More aggressive quotas and usage constraints
When providers can cool efficiently, they can pack more compute into the same footprint, which supports scaling AI services. You might never touch a GPU. But if your AI vendor does, this influences:
- How quickly they can expand capacity
- How they price burst usage
- Whether performance drops during peak demand
Sustainability isn’t just PR anymore
In Singapore, data centre capacity is constrained, and energy efficiency is not optional—regulatory pressure and energy costs make it a business requirement.
Here’s the stance I’ll take: if your AI adoption plan ignores energy and efficiency, you’re betting against Singapore’s direction of travel. The vendors that win will be the ones that can scale responsibly.
What this means for AI business tools in Singapore (marketing, ops, CX)
Nxera’s announcement focuses on hyperscalers and enterprise demand. But the second-order effect is that SMEs get better AI tooling when infrastructure improves—because vendors can run more workloads locally and reliably.
Marketing teams: more real-time, more personalised, less batch processing
When latency and capacity improve, marketing automation becomes less “overnight segmentation” and more “in-the-moment decisions.” Practical examples you can implement in 2026:
- On-site product recommendations that refresh within the session
- Lead scoring that updates as new intent signals arrive (web, email, events)
- Creative testing where AI generates variants and performance feedback loops faster
If you’re serious about this, build your stack around three data flows:
1. First-party behavioural events (web/app)
2. Customer profile and transaction history (CRM/ERP)
3. A fast inference endpoint (vendor AI or your own)
Infrastructure like DC Tuas makes (3) easier to host close to both the data and the user.
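To make the three flows concrete, here is a toy sketch of how they meet in one in-session decision. Everything here is illustrative: `score_offer` is a hypothetical stand-in for a vendor inference endpoint, and the scoring logic is deliberately trivial:

```python
# Toy personalisation call combining the three data flows.
# In production, score_offer would be a low-latency call to an AI
# endpoint hosted close to your data and users.
def score_offer(event, profile):
    """Hypothetical scoring: intent signal x customer history."""
    intent = 2 if event["action"] == "add_to_cart" else 1
    loyalty = min(profile["orders_12m"], 5)
    return intent * loyalty

event = {"action": "add_to_cart", "sku": "SKU-123"}  # flow 1: behavioural event
profile = {"customer_id": "C-9", "orders_12m": 3}    # flow 2: CRM/ERP history
score = score_offer(event, profile)                  # flow 3: fast inference
```

The point of the structure: flows (1) and (2) must be joinable in milliseconds, which is why where flow (3) runs matters.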
Operations: document AI and forecasting get easier to industrialise
Ops teams are adopting AI fastest where it removes repetitive work:
- Invoice and PO extraction
- Contract review and clause identification
- Quality checks and anomaly detection
- Demand forecasting and workforce scheduling
The challenge is moving from “pilot” to “everyday system.” That’s where stable compute and network connectivity matter: integrations run reliably, and workflows don’t stall.
A useful rule:
If your AI workflow touches finance, logistics, or compliance, treat it like production software—not an experiment.
That means monitoring, fallback paths, and predictable performance.
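A production-grade AI step can be as simple as a wrapper that logs, watches its latency budget, and falls back to a deterministic path when the endpoint misbehaves. This is a minimal sketch with hypothetical extractors (`ai_extract`, `rules_extract`); the budget check flags overruns rather than killing the call:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-workflow")

def with_fallback(primary, fallback, budget_s=2.0):
    """Run the AI step; log budget overruns; fall back on failure."""
    start = time.perf_counter()
    try:
        result = primary()
        elapsed = time.perf_counter() - start
        if elapsed > budget_s:
            log.warning("AI step exceeded budget (%.2fs)", elapsed)
        return result
    except Exception as exc:
        log.error("AI step failed (%s); using deterministic fallback", exc)
        return fallback()

# Hypothetical example: AI invoice extraction with a rule-based fallback
def ai_extract():
    raise TimeoutError("inference endpoint unavailable")

def rules_extract():
    return {"vendor": "UNKNOWN", "needs_human_review": True}

result = with_fallback(ai_extract, rules_extract)
```

The fallback doesn’t need to be smart; it needs to be predictable, flag items for human review, and keep finance or logistics workflows moving.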
Customer engagement: voice, video, and multilingual tools need better pipes
Singapore businesses increasingly need multilingual AI (English + Chinese/Malay/Tamil and regional variants) across channels. Higher-density compute and better interconnect make it more realistic for vendors to support:
- Real-time call transcription and summarisation
- Sentiment and intent detection during calls
- Automated QA on 100% of customer interactions (not just samples)
These tools are infrastructure-hungry. They’ll work in a demo on a good day; they’ll only work in production when the backend can scale.
How to choose AI tools when compute is tight (a practical checklist)
Because more than 90% of DC Tuas capacity was committed before launch, it’s smart to assume that premium capacity and low-latency setups will remain in short supply.
Use this checklist when you evaluate AI vendors or plan internal deployments.
1) Ask where data is processed—and get a specific answer
You want clarity on:
- Data residency (Singapore vs region)
- Where inference runs (Singapore vs overseas)
- Whether they can support private networking options
Vague answers are a red flag.
2) Separate “nice-to-have AI” from “business-critical AI” early
Not every workflow needs low latency and high availability. Categorise use cases:
- Tier 1 (critical): customer support, fraud checks, order routing, pricing
- Tier 2 (important): sales enablement, reporting copilots, document processing
- Tier 3 (experimental): internal brainstorming, ad copy variants, prototypes
Then architect accordingly. Tier 1 needs stronger guarantees.
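One way to make “architect accordingly” operational is to write the tiers down as data, so every use case has an explicit SLO before go-live. The numbers and use-case names below are illustrative assumptions, not benchmarks:

```python
# Hypothetical tier-to-SLO mapping; adjust thresholds to your business.
TIER_SLOS = {
    1: {"max_latency_ms": 500,  "min_uptime": 0.999, "needs_sg_inference": True},
    2: {"max_latency_ms": 2000, "min_uptime": 0.99,  "needs_sg_inference": False},
    3: {"max_latency_ms": None, "min_uptime": 0.95,  "needs_sg_inference": False},
}

USE_CASES = {
    "fraud_check": 1,
    "support_copilot": 1,
    "document_processing": 2,
    "ad_copy_variants": 3,
}

def requirements(use_case):
    """Look up the guarantees a use case must meet before go-live."""
    return TIER_SLOS[USE_CASES[use_case]]

print(requirements("fraud_check"))
```

A table like this also gives you the vendor conversation for free: Tier 1 rows are the ones where “where is inference run?” must have a Singapore answer.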
3) Budget for integration and monitoring, not just licenses
Most AI projects don’t fail because the model is bad. They fail because:
- Data pipelines break
- Prompts drift as business rules change
- Output quality isn’t monitored
Allocate time and budget for:
- Evaluation sets (your own examples)
- Automated QA checks
- Human review loops for edge cases
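An evaluation set doesn’t need tooling to start; a handful of your own examples and a pass/fail score is enough to catch drift. A minimal sketch, where `run_model` is a stand-in for the real AI call (here a toy regex extractor) and the invoices are invented examples:

```python
import re

# Your own labelled examples: the inputs your business actually sees.
EVAL_SET = [
    {"input": "INV-2041 total $1,280.00", "expected_total": "1280.00"},
    {"input": "Invoice 88: amount due 540.50", "expected_total": "540.50"},
]

def run_model(text):
    """Stand-in extractor; replace with the real vendor/model call."""
    match = re.search(r"(\d+(?:,\d{3})*\.\d{2})", text)
    return match.group(1).replace(",", "") if match else None

def accuracy(eval_set):
    hits = sum(run_model(ex["input"]) == ex["expected_total"] for ex in eval_set)
    return hits / len(eval_set)

print(f"accuracy: {accuracy(EVAL_SET):.0%}")
```

Run this on every prompt or model change. When accuracy dips, you find out from the eval set, not from finance.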
4) Don’t overbuild—start with AI tools that match your maturity
If you’re an SME, building your own GPU stack is rarely the first move. Start with AI business tools that already solve a specific workflow, then expand into custom models only when:
- You have stable data foundations
- You can measure ROI per workflow
- Vendor solutions can’t meet constraints (latency, privacy, domain specificity)
Singapore’s bigger signal: AI infrastructure is becoming a competitive moat
Nxera also said additional AI-ready capacity is coming in Batam and Johor in 2H 2026, and that its total operational and pipeline capacity is expected to rise from 200MW (2026) to over 400MW in the mid-term. Separately, a Singtel-led consortium with KKR is taking full ownership of ST Telemedia Global Data Centres in a major regional infrastructure move.
If you’re running a Singapore business, here’s the practical stance:
The companies that treat AI infrastructure choices as a core business decision will out-execute those that treat AI as a plugin.
That doesn’t mean you personally need to buy data centre capacity. It means you should choose tools and partners that can scale in Singapore’s constraints: capacity, cost, security expectations, and performance.
Next steps: turn infrastructure progress into business results
A new AI-ready data centre doesn’t automatically improve your P&L. But it makes it easier for vendors to deliver reliable AI services locally—and it gives you more options for secure, low-latency deployments.
If you’re planning your 2026 AI roadmap, do two things this month:
- List your top 3 AI workflows (one for marketing, one for operations, one for customer engagement) and define what “good” looks like in numbers: response time, cost per task, error rate.
- Audit your tool stack for where compute happens and where data flows. If you can’t answer that, you’re not ready to scale.
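Defining “good” in numbers can literally be a few lines. Here is a hypothetical target sheet for one workflow (the figures are illustrative, not benchmarks) and a check that flags any metric over budget; lower is better for all three:

```python
# Hypothetical targets for one AI workflow; numbers are illustrative.
targets = {"response_time_ms": 800, "cost_per_task_sgd": 0.05, "error_rate": 0.02}
actuals = {"response_time_ms": 650, "cost_per_task_sgd": 0.07, "error_rate": 0.01}

def gaps(targets, actuals):
    """Return the metrics where the workflow misses its target."""
    return [k for k in targets if actuals[k] > targets[k]]

print(gaps(targets, actuals))
```

If you maintain one of these per workflow, the scaling conversation stops being abstract: you know exactly which metric a better-connected, better-cooled backend is supposed to move.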
Singapore is building the foundations for the next wave of AI adoption. The question is whether your organisation will build on top of it deliberately—or keep running experiments that never make it into daily operations.