GPU and data centre spending is exploding. Here’s what Singapore businesses should do to adopt AI tools safely, affordably, and at scale in 2026.

AI Cloud GPU Spending Is Rising—What SG Firms Do Now
Nebius just spent US$2.1 billion in a single quarter on GPUs and data centres—up from US$416 million a year earlier. That’s not a fun finance trivia fact. It’s a signal that the “AI compute race” is entering a new phase where capacity, power, and deployment speed decide who gets to ship real AI products—and who waits in line.
For Singapore businesses, this matters in a practical way: the tools you want to use (AI copilots for customer support, automated marketing content, analytics agents, RAG search over internal documents, workflow automation) all sit on top of GPU-backed cloud infrastructure. When cloud providers are buying power contracts measured in gigawatts and stacking Nvidia processors like it’s inventory for the next decade, they’re responding to one thing: demand that keeps outpacing supply.
This post is part of the AI Business Tools Singapore series—focused on how local teams actually adopt AI for marketing, operations, and customer engagement. The headline here isn’t “Nebius grew fast.” The headline is: AI infrastructure is getting built at speed, and Singapore companies should adjust their AI adoption strategy accordingly.
Source context: Nebius (an AI “neocloud” provider) reported sharply higher capex driven by AI processors and data centre investment, plus expansion plans across the US, France, Israel, and the UK. It also highlighted that demand is outpacing supply and that it can sell future capacity in advance. (CNA/Reuters, Feb 2026)
What Nebius’ capex surge actually tells us about AI in 2026
Answer first: This surge says AI isn’t “a feature” anymore—it’s becoming infrastructure, and infrastructure spend follows usage that’s already real.
Nebius reported:
- Capex ~US$2.1B in the December quarter (vs US$416M prior year)
- Q4 revenue of US$227.7M, up more than six-fold YoY (but below estimates)
- Net loss widened to US$249.6M
- Over 2 gigawatts (GW) of contracted power secured, with a target of 3 GW by year-end
- Plans for nine new data centre sites across multiple countries
If you’re running a business, the key insight is simple: the market is paying up for compute capacity because AI workloads are sticky and expanding. Training, fine-tuning, inference at scale, embeddings, multimodal models, and agentic workflows all burn compute. Even if you never train a model, your teams will still consume GPU time through commercial AI tools.
“Neoclouds” are booming for a reason
Answer first: Neoclouds exist because hyperscalers aren’t always the fastest route to GPU capacity and specialised AI performance.
Nebius (like CoreWeave, mentioned in the article) sells what many companies want right now: Nvidia-backed AI infrastructure as a service. These providers compete on GPU availability, cluster performance, deployment help, and sometimes pricing structures that are more tailored to AI teams.
For Singapore companies, you don’t need to pick a side in the “hyperscaler vs neocloud” debate. What you should take from this is that compute access is a strategic dependency, like payments infrastructure or logistics. Treat it that way.
Why Singapore businesses should care: AI tools are only as good as the compute behind them
Answer first: You’re buying “AI features,” but you’re implicitly buying compute availability, latency, compliance posture, and cost predictability.
In Singapore, many teams are now past experimentation. The question has shifted from “Can we use AI?” to “Can we deploy AI reliably without costs exploding or outputs going off the rails?” Infrastructure investment like Nebius’ is one reason AI capabilities keep improving—and also why usage-based pricing can surprise you.
Here’s what I’ve found works: stop thinking about AI adoption as one project. Think of it as a portfolio of workloads, each with different compute needs.
Common AI workloads in SG companies (and what they demand)
- Customer support copilots (chat + knowledge base)
  - Needs: fast inference, reliable retrieval, strong governance
  - Risk: hallucinations and confidential leakage if access controls are weak
- Marketing content production (ads, landing pages, localisation)
  - Needs: moderate inference, brand guardrails, human review workflow
  - Risk: inconsistent tone, compliance issues in regulated industries
- Sales enablement (proposal drafting, call summaries, CRM updates)
  - Needs: integration with CRM + data residency considerations
  - Risk: “shadow AI” if the official tool is slow or blocked
- Ops automation (invoice parsing, procurement, SOP assistants)
  - Needs: stable pipelines, document processing, audit trails
  - Risk: brittle automations if exception handling isn’t designed
When cloud providers spend billions on GPUs and data centres, it’s because all of the above are scaling at the same time, across many industries.
The real constraint isn’t “AI ideas”—it’s power, capacity, and unit economics
Answer first: AI expansion is hitting physical limits (power, space, chips), and that shapes pricing and availability for everyone.
The Nebius article highlights contracted power in gigawatts. That’s a clue most business leaders overlook. For AI, electricity isn’t a footnote—it’s the bottleneck.
What this means for Singapore companies:
- Plan for price volatility in AI usage.
  - If your “AI tool” bill is tied to tokens, images, minutes, or GPU hours, you need forecasting.
- Expect capacity rationing during demand spikes.
  - Product launches, new model releases, and enterprise adoption waves can create temporary shortages.
- Design for efficiency early.
  - The cheapest token is the one you don’t generate. Prompt discipline and retrieval design lower costs materially.
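To make “forecasting” concrete, here’s a minimal sketch of a token-based cost estimate. All prices and volumes are made-up placeholders, not any vendor’s actual rates:

```python
# Rough monthly cost forecast for a token-billed AI tool.
# Prices and request volumes below are illustrative assumptions only.

def monthly_cost(requests_per_day: int,
                 tokens_in: int,
                 tokens_out: int,
                 price_in_per_1k: float,
                 price_out_per_1k: float,
                 days: int = 30) -> float:
    """Estimate monthly spend from per-request token counts."""
    per_request = (tokens_in / 1000) * price_in_per_1k \
                + (tokens_out / 1000) * price_out_per_1k
    return per_request * requests_per_day * days

# Example: a support copilot answering 500 queries/day.
base = monthly_cost(500, tokens_in=2000, tokens_out=400,
                    price_in_per_1k=0.003, price_out_per_1k=0.015)

# Trimming retrieved context halves input tokens, a direct saving.
trimmed = monthly_cost(500, tokens_in=1000, tokens_out=400,
                       price_in_per_1k=0.003, price_out_per_1k=0.015)

print(f"baseline: US${base:,.0f}/mo, trimmed: US${trimmed:,.0f}/mo")
```

Even a spreadsheet version of this calculation makes the “design for efficiency” point tangible before the first invoice arrives.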
A practical way to think about AI costs: cost per business outcome
Instead of tracking “tokens used,” track:
- Cost per resolved ticket (support)
- Cost per qualified lead (marketing)
- Cost per proposal generated (sales)
- Cost per document processed (ops)
If you can’t connect compute usage to outcomes, AI becomes a line item you’ll cut the moment budgets tighten.
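The outcome-level metrics above can be computed from numbers you already have. A quick sketch, with invented figures purely for illustration:

```python
# Sketch: translate raw AI spend into cost per business outcome.
# All spend and outcome figures are made up for illustration.

def cost_per_outcome(total_ai_spend: float, outcomes: int) -> float:
    """US$ spent per outcome (ticket resolved, lead qualified, etc.)."""
    if outcomes == 0:
        return float("inf")  # no outcomes means the spend bought nothing measurable
    return total_ai_spend / outcomes

support = cost_per_outcome(total_ai_spend=1800.0, outcomes=2400)   # resolved tickets
marketing = cost_per_outcome(total_ai_spend=950.0, outcomes=130)   # qualified leads

print(f"support: US${support:.2f} per resolved ticket")
print(f"marketing: US${marketing:.2f} per qualified lead")
```

The point is the denominator: once spend is attributed per workflow, comparing it against a human baseline (or a budget cap) becomes a one-line calculation.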
A Singapore-ready playbook: adopt AI tools without getting trapped by compute costs
Answer first: Pick high-ROI workflows, put guardrails in place, and build a two-layer strategy (tools now, platform later).
Most companies get this wrong by doing the opposite: they start by “choosing a platform,” then hunt for use cases. The better sequence is:
1) Start with 2–3 workflows that have measurable pain
Good candidates in Singapore tend to be:
- Support queues that spike around campaigns or seasonal periods
- Marketing teams producing multi-language variants (English, Chinese, Malay, Tamil)
- Finance teams handling repetitive document work
- HR teams answering policy questions repeatedly
Write the success metric before you pick the tool.
2) Use commercial AI tools first—but demand governance
Commercial tools are fast to roll out, but don’t accept “it’s secure” on a vendor slide as proof.
Ask for:
- Data handling clarity (training usage, retention, access)
- Admin controls (SSO, role-based access)
- Audit logs
- Model options (so you can switch based on cost/performance)
3) Treat retrieval as a product, not a checkbox
Answer first: Most hallucination problems are retrieval problems.
If you’re building an internal AI assistant, your advantage isn’t the model—it’s your documents, SOPs, pricing rules, and customer history.
Do the unglamorous work:
- Curate the knowledge base
- Set document ownership and freshness rules
- Build “approved answers” for high-risk topics (pricing, legal, HR)
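Here’s what an “approved answers” guardrail plus a freshness rule can look like in miniature. The topics, answer text, substring matching, and 180-day limit are all illustrative assumptions, not a production design:

```python
# Sketch: route high-risk topics to pre-approved answers before any model
# call, and refuse to answer from stale documents. Everything here
# (topics, wording, matching rule, freshness window) is illustrative.
from datetime import date, timedelta

APPROVED = {
    "pricing": "Please refer to the official rate card; sales can confirm quotes.",
    "leave policy": "Annual leave entitlements are set out in the HR handbook, section 4.",
}

FRESHNESS_LIMIT = timedelta(days=180)  # documents older than this need review

def answer(query: str, doc_last_reviewed: date, today: date) -> str:
    q = query.lower()
    for topic, approved_text in APPROVED.items():
        if topic in q:
            return approved_text  # high-risk topic: never improvise
    if today - doc_last_reviewed > FRESHNESS_LIMIT:
        return "Source document is stale; escalating to the document owner."
    return "OK to answer from the knowledge base (model call goes here)."
```

The shape matters more than the code: high-risk topics short-circuit to curated text, and freshness rules gate everything else, before a model ever generates a word.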
4) Decide where you need “AI cloud” vs “AI tool”
A simple split:
- If the use case is generic (writing, summarising, brainstorming): buy a tool.
- If the use case touches proprietary data, needs integrations, or must be auditable: consider an AI cloud approach (or a managed deployment) so you can control architecture, logs, and cost.
Nebius’ spending spree is basically a bet that more companies will move from tool usage to production workloads that require serious infrastructure.
What to watch in 2026: signals that affect SG AI roadmaps
Answer first: Watch capacity, pricing models, and compliance requirements—not just model benchmarks.
Three signals that will matter for Singapore businesses this year:
1) Capacity gets “pre-sold”
Nebius’ CEO said they can sell future capacity well in advance because demand exceeds supply. Translation: the best infrastructure gets booked early.
If you’re planning an AI-heavy rollout (contact centre automation, AI search across enterprise docs, analytics agents), don’t wait until Q4 to procure.
2) Power and data centre footprint become competitive moats
The article notes expansion across multiple geographies and power contracting. This is the infrastructure layer catching up with AI demand. It also means vendors will market “availability” and “throughput” as features.
For SG teams, the practical question is: where will your data be processed, and what controls do you have?
3) Unit economics will separate real adoption from hype
Nebius grew revenue fast but still posted a larger net loss. That’s common in infrastructure build-outs. For customers, it’s a reminder to avoid vendor lock-in where possible:
- Keep prompts, templates, and knowledge bases portable
- Abstract model access (so you can switch providers)
- Negotiate predictable pricing where you can
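“Abstract model access” can be as simple as a thin interface in your own code, so switching providers is a config change rather than a rewrite. The provider classes here are stand-ins, not real SDKs:

```python
# Sketch of a provider-abstraction layer. ProviderA/ProviderB are
# hypothetical stand-ins for real vendor SDK clients.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt[:40]}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt[:40]}"

PROVIDERS: dict[str, ChatModel] = {"a": ProviderA(), "b": ProviderB()}

def run(prompt: str, provider: str = "a") -> str:
    # Prompts and templates live in your code; only this lookup is
    # vendor-specific, which keeps switching costs low.
    return PROVIDERS[provider].complete(prompt)
```

Keeping prompts, templates, and the knowledge base on your side of this interface is what makes the “avoid lock-in” advice actionable.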
A straightforward next step for Singapore teams
If you’re responsible for AI adoption—marketing ops, customer experience, IT, or general management—do this next week:
- List your top 10 repetitive tasks by hours spent.
- Pick the top 2 that are low-risk and measurable.
- Run a 14-day pilot with clear cost and accuracy tracking.
- Decide whether you’re staying at “tool level” or moving to “platform level.”
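The pilot’s go/no-go can be encoded up front, so the decision rule is agreed before the data arrives. The thresholds below are placeholders to adapt to your own baseline:

```python
# Sketch of a 14-day pilot log: record cost and accuracy per day so the
# scale/rework decision is data, not vibes. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class PilotDay:
    tasks_done: int
    tasks_correct: int   # passed human review
    ai_spend_usd: float

def pilot_verdict(days: list[PilotDay],
                  min_accuracy: float = 0.9,
                  max_cost_per_task: float = 0.50) -> str:
    done = sum(d.tasks_done for d in days)
    correct = sum(d.tasks_correct for d in days)
    spend = sum(d.ai_spend_usd for d in days)
    accuracy = correct / done if done else 0.0
    cost_per_task = spend / done if done else float("inf")
    if accuracy >= min_accuracy and cost_per_task <= max_cost_per_task:
        return "scale"
    return "rework"
```

Writing the verdict function before day one forces the team to name its accuracy and cost bars explicitly, which is most of the value of running the pilot at all.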
The infrastructure boom (like Nebius’ GPU and data centre spending) is good news: it means AI capability is getting cheaper per unit over time if you design properly. The bad news is that sloppy adoption gets expensive fast, because every unnecessary prompt and every poorly scoped workflow burns compute.
Where do you want AI to sit in your company by end-2026—an occasional helper, or an operational layer your team actually trusts?