Google Cloud’s December 2025 releases add AI agents, MCP governance, and Model Armor controls. Here’s what security teams should do next.

Google Cloud’s December AI Updates for Secure Ops
The fastest way to understand where cloud security is heading is to watch what cloud platforms quietly ship in release notes. Mid-December 2025 was one of those weeks: Google Cloud pushed a cluster of updates that, taken together, signal a clear shift toward AI-assisted operations becoming part of the infrastructure layer—not just something you bolt on.
For anyone responsible for AI in cybersecurity, this matters for a simple reason: the attack surface is expanding (more APIs, more agents, more data paths), while the tolerance for operational mistakes is shrinking. The practical win isn’t “more AI.” It’s fewer blind spots, faster triage, tighter identity and key controls, and safer agentic workflows.
Below is what stood out in the latest Google Cloud release notes—through a security and operations lens—and how to turn these updates into real-world improvements in cloud security posture.
AI is moving into the data layer (and that changes security)
When generative AI runs closer to your data—inside databases or within analytics engines—the security questions shift from “Who can access the model?” to “Who can access model-powered actions on sensitive data?”
Several releases point directly at that trend.
Database “data agents” are the new privileged interface
Google introduced data agents (Preview) for multiple database products:
- AlloyDB for PostgreSQL
- Cloud SQL for MySQL
- Cloud SQL for PostgreSQL
- Spanner
Data agents are conversational interfaces that interact with your data on a user's behalf. In security terms, that makes them a new control plane. Treat them like one.
What I’d do before letting any team adopt database agents:
- Define an “agent access tier” the same way you define production access tiers. Agents shouldn’t inherit broad human privileges by accident (see the sketch after this list).
- Decide where prompts and responses are stored and for how long (logs become sensitive data fast).
- Put prompt and response filtering in place (more on Model Armor later) before the first rollout—not after an incident.
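To make the “agent access tier” idea concrete, here's a minimal sketch of what such a tier could look like as code. Everything here is hypothetical naming, not a Google Cloud API: the point is that each agent gets an explicit allowlist of operations and datasets instead of a human's inherited roles.

```python
from dataclasses import dataclass

# Hypothetical model of an "agent access tier" -- illustrative only,
# not a Google Cloud API. Each database agent runs under its own
# identity bound to a tier, rather than inheriting human privileges.

@dataclass(frozen=True)
class AgentAccessTier:
    name: str
    allowed_operations: frozenset   # e.g. {"SELECT"} -- no writes by default
    allowed_datasets: frozenset     # explicit allowlist, no wildcards
    max_rows_per_query: int
    log_prompts: bool = True        # prompts/responses become sensitive logs

READ_ONLY_TIER = AgentAccessTier(
    name="agent-read-only",
    allowed_operations=frozenset({"SELECT"}),
    allowed_datasets=frozenset({"analytics.public_metrics"}),
    max_rows_per_query=10_000,
)

def agent_may_run(tier: AgentAccessTier, operation: str, dataset: str) -> bool:
    """Gate an agent-issued statement before it reaches the database."""
    return operation in tier.allowed_operations and dataset in tier.allowed_datasets
```

The enforcement point matters more than the schema: put the check in a proxy or middleware layer the agent cannot bypass.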
Gemini model availability in data systems increases both capability and risk
Google expanded Gemini model options in key places:
- AlloyDB: Gemini 3.0 models, including Gemini 3 Flash (Preview), available for generative AI functions (for example, AI.GENERATE).
- Vertex AI: Gemini 3 Flash public preview with stronger reasoning, coding, and multimodal capabilities.
- Gemini Enterprise: Gemini 3 Flash (Preview) can be enabled via admin controls.
This matters because stronger reasoning means stronger automation—and stronger automation means mistakes can scale. If a model can take action (or generate SQL that’s executed), your guardrails need to be operational, not theoretical:
- Policy-based controls (who can call which AI function, from where)
- Rate limits and anomaly detection for AI-enabled query patterns (sketched below)
- Auditable approvals for any “write” capability
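On the rate-limit point, a sliding-window check per principal is enough to catch the common failure mode where an agent loop starts hammering a generative SQL function. This is a sketch with assumed thresholds, not tuned guidance.

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate check for AI-issued queries.
# WINDOW_SECONDS and the threshold are assumptions; in production this
# logic would live in a query gateway or proxy in front of the database.

WINDOW_SECONDS = 60
MAX_AI_QUERIES_PER_WINDOW = 30

_recent: dict = defaultdict(deque)

def allow_ai_query(principal: str) -> bool:
    """Return False when a principal's AI-generated query rate exceeds
    the window threshold; the caller should block and raise an alert."""
    now = time.monotonic()
    window = _recent[principal]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_AI_QUERIES_PER_WINDOW:
        return False
    window.append(now)
    return True
```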
API security gets real about agents (MCP, multi-gateway governance)
APIs are already the core attack surface in cloud environments. Agentic systems expand that surface again because agents don’t just call APIs—they coordinate across them.
Model Context Protocol (MCP) becomes a first-class citizen
Google introduced several key pieces around Model Context Protocol (MCP):
- Apigee API hub: MCP support as an API style, including tool extraction from MCP specs.
- Cloud API Registry (Preview): discover/govern MCP servers and tools across your org.
- BigQuery remote MCP server (Preview): enables LLM agents to perform data tasks.
Security stance: MCP is not “just another spec.” It’s the interface layer for tools. Tools are where data exfiltration, privilege escalation, and unintended side effects happen.
What to operationalize:
- Inventory MCP servers/tools the same way you inventory APIs (see the sketch after this list).
- Enforce standards (authentication, logging, data classification) at the registry/hub level.
- Threat model tool execution: every tool is effectively a micro-integration with its own blast radius.
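A starting point for that inventory is a structured register that forces every tool to declare its auth mode, data classification, and side effects. The schema below is hypothetical, not the Cloud API Registry's actual data model; it's roughly the minimum set of fields that makes threat modeling possible.

```python
from dataclasses import dataclass
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

@dataclass
class McpTool:
    server: str            # MCP server exposing the tool
    name: str              # tool name as agents see it
    auth: str              # e.g. "oauth2", "service-account"
    data_class: DataClass  # highest classification the tool can touch
    side_effects: bool     # True if the tool writes, deletes, or sends

def high_blast_radius(tools: list) -> list:
    """Flag tools that both touch confidential data and have side effects;
    each one deserves its own threat model."""
    return [t for t in tools
            if t.data_class is DataClass.CONFIDENTIAL and t.side_effects]
```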
Advanced API Security for multi-gateway projects (real governance)
Apigee Advanced API Security now supports centralized risk scoring and security profiles across:
- Apigee X
- Apigee hybrid
- Apigee Edge Public Cloud
This is the right direction. Most organizations don’t fail at API security because they lack WAF rules. They fail because security posture is inconsistent across gateways and environments.
If you’ve got multiple gateways, the practical playbook is:
- Create one organizational security profile baseline.
- Use custom security profiles per business unit only when necessary.
- Track security score drift by gateway/environment.
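Drift tracking itself doesn't need to be sophisticated: snapshot scores on a schedule and alert on drops. How scores are fetched from Apigee is left abstract here, and the alert threshold is an illustrative assumption.

```python
# Minimal drift check over periodic API security score snapshots.
# Fetching scores from Apigee is left abstract; the 50-point
# threshold is an illustrative assumption, not guidance.

DRIFT_ALERT_THRESHOLD = 50

def score_drops(previous: dict, current: dict) -> dict:
    """Return per-gateway score drops between two snapshots."""
    return {gw: previous[gw] - current[gw]
            for gw in previous.keys() & current.keys()
            if previous[gw] > current[gw]}

previous = {"apigee-x-prod": 870, "hybrid-eu": 910}
current = {"apigee-x-prod": 790, "hybrid-eu": 905}

for gateway, drop in score_drops(previous, current).items():
    if drop >= DRIFT_ALERT_THRESHOLD:
        print(f"review {gateway}: security score dropped {drop} points")
```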
Also note: current limitations exist around VPC Service Controls for this add-on—plan deployments accordingly.
Security controls are catching up to AI reality (Model Armor + AI Protection)
The most important security updates in this batch are about putting enforceable guardrails around agentic and generative systems.
Model Armor expands from “nice to have” to operational control
Security Command Center updates include:
- Model Armor monitoring dashboard is GA.
- Model Armor integration with Vertex AI is GA.
- Model Armor floor settings (Preview) for Google-managed MCP servers to define baseline filters.
- Model Armor integration with Google Cloud MCP servers (Preview).
Here’s the thing about “AI safety filters”: they’re only useful if they’re centralized, observable, and enforced. Model Armor is moving in that direction.
A practical baseline I’ve found works:
- Block obvious injection patterns (tool override attempts, credential harvesting prompts).
- Sanitize model responses before they enter downstream automation.
- Log sanitization events to Cloud Logging and alert on spikes (spikes often correlate with active probing).
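Here's the rough shape of that baseline as code: a naive pattern screen in front of the model plus a structured log entry per block, so spikes are visible in Cloud Logging. The patterns are deliberately simplistic examples, and the logging call assumes the google-cloud-logging Python client; a managed filter like Model Armor should sit behind this, not be replaced by it.

```python
import re
from google.cloud import logging as cloud_logging  # pip install google-cloud-logging

# Deliberately naive injection patterns -- a floor, not a substitute
# for a managed filter such as Model Armor.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal .*(api key|password|credential)", re.IGNORECASE),
]

log_client = cloud_logging.Client()
logger = log_client.logger("ai-sanitization-events")

def screen_prompt(prompt: str, principal: str) -> bool:
    """Block prompts matching known injection patterns and log a
    structured event; alert on spikes of these entries."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            logger.log_struct(
                {"event": "prompt_blocked",
                 "principal": principal,
                 "pattern": pattern.pattern},
                severity="WARNING",
            )
            return False
    return True
```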
AI Protection and Agent Engine Threat Detection signal a new SOC workload
Security Command Center added:
- AI Protection (GA in Enterprise tier; Preview in Premium tier)
- Agent Engine Threat Detection (Preview)
This is a direct acknowledgement of what security teams are already seeing: AI agents are becoming production workloads, and they need detection coverage.
If you’re deploying agents on Vertex AI Agent Engine:
- Treat the agent runtime as a monitored production environment.
- Define what “normal” looks like (tool call rates, destinations, prompt sizes), as sketched below.
- Build a response plan for AI-specific incidents (prompt injection, tool misuse, data leakage).
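As a sketch of what “normal” can mean in practice, the baseline below puts hard bounds on exactly the three signals named above; field names and values are illustrative assumptions, not Agent Engine APIs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentBaseline:
    """Illustrative 'normal' profile for one agent; values are assumptions."""
    max_tool_calls_per_minute: int
    allowed_destinations: frozenset  # hosts/tools the agent may reach
    max_prompt_bytes: int

BASELINE = AgentBaseline(
    max_tool_calls_per_minute=20,
    allowed_destinations=frozenset({"bigquery.googleapis.com"}),
    max_prompt_bytes=32_768,
)

def violations(calls_last_minute: int, destination: str, prompt: str) -> list:
    """Check one observed call against the baseline; a non-empty result
    should feed the AI-specific incident response plan."""
    found = []
    if calls_last_minute > BASELINE.max_tool_calls_per_minute:
        found.append("tool-call rate above baseline")
    if destination not in BASELINE.allowed_destinations:
        found.append(f"unexpected destination: {destination}")
    if len(prompt.encode()) > BASELINE.max_prompt_bytes:
        found.append("prompt size above baseline")
    return found
```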
Infrastructure and operations updates that quietly improve security
Not every security improvement comes labeled “security.” Some come from reliability, visibility, and standards enforcement.
Single-tenant Cloud HSM (GA) is a big deal for regulated workloads
Cloud KMS introduced Single-tenant Cloud HSM as GA in us-central1, us-east4, europe-west1, and europe-west4.
This is a strong option for organizations that need dedicated HSM partitions and stricter administrative control. It also enforces quorum approval and 2FA using keys managed outside Google Cloud.
Use cases that actually justify single-tenant HSM:
- Payment systems with strict key custody expectations
- Highly regulated government workloads
- Cryptographic separation requirements beyond multi-tenant assurances
Access Approval “access insights” (GA) helps answer the hardest audit question
Access Approval’s new access insights feature provides an org-wide report of Google administrative access to your data.
From an incident response and compliance view, this supports:
- Faster scoping during investigations
- Cleaner audit evidence collection
- Better internal reporting to risk teams
Cloud Load Balancing RFC enforcement reduces noisy edge behavior
Starting December 17, 2025, non-RFC-compliant request methods are rejected earlier by Google Front End for certain global external load balancers.
This won’t stop real attackers, but it can:
- Reduce inconsistent backend error patterns
- Slightly reduce “garbage traffic” reaching your apps
- Improve signal quality for anomaly detection at the edge
What to do next: a 30-day adoption checklist for security teams
Release notes are only useful if they turn into backlog items. If I were running cloud security operations heading into Q1 planning, I’d prioritize these actions.
1) Build an “agent surface area” inventory
- List every agent runtime (Vertex AI Agent Engine, custom runtimes, third-party)
- List every tool endpoint (including MCP tools)
- Map tools to data classifications and IAM permissions
2) Standardize AI guardrails with Model Armor
- Define minimum filtering policies (“floor settings”) for prompts and responses
- Turn on logging for sanitization operations
- Create alerts for spikes, repeated blocks, and policy violations
3) Centralize API risk scoring across gateways
- Enable multi-gateway risk assessment where applicable
- Align teams on what “acceptable” security score ranges are
- Tie API security score drift to operational reviews
4) Harden key management where it matters
- Identify workloads that truly need dedicated HSM
- Validate quorum/2FA processes and break-glass procedures
- Confirm key rotation and audit requirements are met
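For that last item, the google-cloud-kms Python client can enumerate keys and flag any without an automatic rotation schedule; the project, location, and key ring names below are placeholders.

```python
from google.cloud import kms  # pip install google-cloud-kms

# Flag symmetric keys in one key ring that lack automatic rotation.
# Project, location, and key ring names are placeholders.
client = kms.KeyManagementServiceClient()
parent = client.key_ring_path("my-project", "us-central1", "payments-ring")

for key in client.list_crypto_keys(request={"parent": parent}):
    if (key.purpose == kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
            and not key.rotation_period):
        print(f"no rotation schedule: {key.name}")
```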
5) Prepare for January/February billing and deprecation changes
Some changes are time-bound:
- Apigee Debug v1 shutdown scheduled for January 15, 2026
- Vertex AI Agent Engine Sessions/Memory Bank/Code Execution begin charging on January 28, 2026
These aren’t “security” updates, but they affect operational continuity.
The direction is clear: AI-assisted security is becoming infrastructure
The thread running through these updates is consistent: AI capabilities are being pushed down into databases, API platforms, and cloud operations, and security controls are being built closer to where the work happens.
For the AI in cybersecurity story, that’s good news—but only if organizations match the platform’s pace with disciplined operations: inventory, least privilege, consistent policy enforcement, and strong observability.
If your 2026 security roadmap still treats “AI security” as a side project, you’ll be playing catch-up. The more realistic approach is to treat agentic systems as production workloads and govern them like everything else you care about.