Google Cloud Launches AI Agent “Integration Platform”… Security and Governance Take Center Stage

Google Cloud has released an integration platform called “Gemini Enterprise Agent Platform,” which consolidates the development and operation of artificial intelligence (AI) agents. The move effectively reorganizes the existing Vertex AI into a new core hub designed to let enterprises create, deploy, and manage AI agents on a single platform.

This announcement was made at “Google Cloud Next 2026,” held in Las Vegas, USA. The new platform integrates not only model selection, development, and agent-building capabilities but also orchestration, DevOps, and security features. Google Cloud states that the platform lets technical teams seamlessly deploy AI agents to employees via the newly launched “Gemini Enterprise” app once development is complete, with the goal of automating work across the entire organization.

Google Cloud Vice President of Product Management Michael Gerstenhaber said in a blog post, “Where Vertex AI was originally designed to support large-scale engineering in the early generative AI era, we are now entering a stage where we must manage the complexity of agents operating across multiple systems.” He emphasized that without security and governance mechanisms, the proliferation of agents could make it difficult for enterprises to build trust.

“Agent Studio” for general employees and “ADK” for developers both strengthened

Google has enhanced AI agent building capabilities around “Agent Studio” and “Agent Development Kit (ADK).” Agent Studio offers a low-code visual interface accessible to non-developers, allowing drag-and-drop design of agent logic.

For professional developers, ADK targets more complex tasks. Google provides access to models with strong reasoning capabilities and has introduced a graph-based framework that connects multiple sub-agents to tackle complex problems, making it possible to build a “multi-agent team” rather than a single agent.
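The article does not publish any ADK code, but the “multi-agent team” idea it describes can be sketched in plain Python: a coordinator that delegates each task to a matching specialist. All class and method names below are hypothetical illustrations, not Google's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    """A specialist that handles one category of task (hypothetical sketch)."""
    name: str
    skill: str  # the task category this agent accepts

    def run(self, task: str) -> str:
        return f"{self.name} handled: {task}"

@dataclass
class CoordinatorAgent:
    """Routes each incoming task to the sub-agent whose skill matches."""
    sub_agents: list[SubAgent] = field(default_factory=list)

    def run(self, task: str, category: str) -> str:
        for agent in self.sub_agents:
            if agent.skill == category:
                return agent.run(task)
        raise ValueError(f"no sub-agent for category {category!r}")

team = CoordinatorAgent(sub_agents=[
    SubAgent("researcher", "search"),
    SubAgent("writer", "draft"),
])
print(team.run("summarize Q3 sales", "draft"))  # writer handled: summarize Q3 sales
```

The design point is that the coordinator owns only routing logic; each specialist stays small and testable, which is what makes a “team” easier to govern than one monolithic agent.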

Additionally, connecting internal data has become easier. Google states that ADK supports ecosystem integration by default, enabling connection to enterprise data without custom pipelines. By linking with data platforms like BigQuery and Pub/Sub, agents can handle large-scale asynchronous tasks such as content evaluation or data analysis in the background.
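The article names BigQuery and Pub/Sub but gives no code; the asynchronous background-task pattern it describes can be sketched with only Python's standard library (no Google APIs involved, and the worker logic is a toy stand-in for real analysis):

```python
import queue
import threading

tasks: "queue.Queue[str | None]" = queue.Queue()
results: list[str] = []

def background_worker() -> None:
    # Drain the queue until a None sentinel arrives, simulating an agent
    # processing large-scale analysis jobs asynchronously in the background.
    while True:
        item = tasks.get()
        if item is None:
            tasks.task_done()
            break
        results.append(f"analyzed:{item}")
        tasks.task_done()

worker = threading.Thread(target=background_worker)
worker.start()
for doc in ["doc-1", "doc-2"]:
    tasks.put(doc)          # enqueue work, as a Pub/Sub topic would
tasks.join()                # block until both jobs are processed
tasks.put(None)             # sentinel: tell the worker to stop
worker.join()
print(results)              # ['analyzed:doc-1', 'analyzed:doc-2']
```

In production the queue would be a managed service like Pub/Sub, but the decoupling shown here — producers never wait on the agent doing the work — is the same.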

New “Runtime” and “Memory” features for actual service operation

Google Cloud has also revamped the “Agent Runtime” to support AI agents running in real-world environments beyond proof-of-concept. The new Runtime simplifies deployment and supports long-running workflows lasting several days. It also includes orchestration features that allow agents to delegate tasks to each other, enabling complex workflows to be divided among multiple specialized agents.

Context retention, a core capability in large-scale operations, has also been strengthened. Google has created an “Agent Memory Bank” that dynamically creates and manages long-term memory based on dialogue content, and has added a “Memory Profile” that lets agents reload detailed information with low latency. For enterprises, this serves as a mechanism to reduce context loss and improve accuracy in repetitive tasks.
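Google has not published the Memory Bank API, but the idea — distill durable facts out of dialogue, then reload them cheaply per user — can be sketched as a toy class. The “distillation” rule here (remembering preference statements) is deliberately naive and purely illustrative.

```python
from collections import defaultdict

class AgentMemoryBank:
    """Toy long-term memory keyed by user (hypothetical sketch)."""

    def __init__(self) -> None:
        self._memories: dict[str, list[str]] = defaultdict(list)

    def record(self, user_id: str, utterance: str) -> None:
        # Naive stand-in for "dynamic memory creation": only statements
        # about preferences are promoted to long-term memory.
        if "prefer" in utterance.lower():
            self._memories[user_id].append(utterance)

    def profile(self, user_id: str) -> list[str]:
        # Fast reload of everything remembered for this user,
        # analogous to the low-latency "Memory Profile" described.
        return list(self._memories[user_id])

bank = AgentMemoryBank()
bank.record("u1", "I prefer weekly summaries.")
bank.record("u1", "What's the weather?")   # transient; not stored
print(bank.profile("u1"))                  # ['I prefer weekly summaries.']
```

The split matters: not everything said gets remembered, which is what keeps long-term memory small enough to reload with low latency.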

Comprehensive security and governance… a unique identity for every agent

Security and control are particularly emphasized in this platform. Google states that it has applied a “security-embedded” architecture, so that the same enterprise policy controls apply whether agents are created by customers or imported from partner ecosystems.

The core is “agent identity.” Just as every person has an identity, each AI agent is assigned a unique cryptographic ID designed to leave an auditable record of all its actions. These records can be linked to predefined permission policies, which is expected to help with internal compliance and accountability.
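Google has not disclosed how these identities are generated, but the general pattern — a stable ID per agent plus a tamper-evident action log — can be sketched with Python's standard library. The key, the ID derivation, and all names are assumptions for illustration only.

```python
import hashlib
import hmac
import json
import time

SECRET = b"org-signing-key"  # hypothetical org-wide signing key

def agent_id(name: str) -> str:
    # Derive a stable, unique ID from the agent's name (illustration only).
    return hashlib.sha256(name.encode()).hexdigest()[:16]

class AuditLog:
    """Append-only action log where every entry is tied to an agent ID."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def log(self, agent: str, action: str) -> dict:
        entry = {"agent_id": agent_id(agent), "action": action, "ts": time.time()}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self, entry: dict) -> bool:
        # Recompute the signature over everything except "sig"; a mismatch
        # means the record was altered after it was written.
        unsigned = {k: v for k, v in entry.items() if k != "sig"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(entry["sig"], expected)

audit = AuditLog()
record = audit.log("billing-agent", "issued refund")
print(audit.verify(record))  # True
```

Because every entry carries the agent's ID and a signature, an auditor can attribute each action to exactly one agent and detect after-the-fact edits — the accountability property the article describes.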

At the same time, an “Agent Registry” for centrally managing authorized tools and agents and an “Agent Gateway” for overseeing agent operations have been added. Google says these tools let administrators view the entire AI agent estate at a glance and enforce consistent security policies. Threat detection and real-time behavior monitoring are provided through the “Agent Security” dashboard.
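The registry/gateway split described above is a familiar pattern: a catalogue of what is allowed, and a single choke point that enforces it. A minimal sketch, with all class names and tool strings hypothetical:

```python
class AgentRegistry:
    """Central catalogue of approved agents and the tools each may call."""

    def __init__(self) -> None:
        self._allowed: dict[str, set[str]] = {}

    def register(self, agent: str, tools: set[str]) -> None:
        self._allowed[agent] = set(tools)

    def permitted(self, agent: str, tool: str) -> bool:
        return tool in self._allowed.get(agent, set())

class AgentGateway:
    """Single choke point: every tool call is checked against the registry."""

    def __init__(self, registry: AgentRegistry) -> None:
        self.registry = registry

    def call(self, agent: str, tool: str, fn, *args):
        if not self.registry.permitted(agent, tool):
            raise PermissionError(f"{agent} is not authorized to use {tool}")
        return fn(*args)

registry = AgentRegistry()
registry.register("report-agent", {"warehouse.read"})
gateway = AgentGateway(registry)
print(gateway.call("report-agent", "warehouse.read",
                   lambda q: f"rows for {q}", "sales"))  # rows for sales
```

Routing every call through one gateway is what makes “consistent security policies” possible: an unregistered agent or an unapproved tool fails in one place, regardless of who built the agent.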

Support for the entire process from pre-deployment testing to operational optimization

Google Cloud also offers features for pre-validating AI agent performance and continuous improvement during operation. “Agent Simulation” allows users to test agents in controlled environments using virtual tools and synthetic workloads. After deployment, “Agent Evaluation” tools can continuously score task execution results.
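The simulate-then-score loop described above reduces to two small functions: run the agent over synthetic tasks in a controlled loop, then grade the outputs against expectations. This is a generic sketch of the pattern, not Google's tooling; the toy agent and its scoring rule are invented for illustration.

```python
def simulate(agent_fn, synthetic_tasks):
    # Run the agent against synthetic workloads in a controlled loop,
    # standing in for the sandboxed environment the article describes.
    return [agent_fn(task) for task in synthetic_tasks]

def evaluate(outputs, expected):
    # Score task results as the fraction of tasks whose output matched.
    hits = sum(out == exp for out, exp in zip(outputs, expected))
    return hits / len(expected)

def toy_agent(task: str) -> str:
    return task.upper()  # stand-in for a real agent's behavior

outputs = simulate(toy_agent, ["alpha", "beta", "gamma"])
score = evaluate(outputs, ["ALPHA", "BETA", "WRONG"])
print(score)  # 2 of 3 tasks matched
```

Running the same synthetic suite before and after each change gives a comparable score over time, which is the point of continuous evaluation after deployment.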

Troubleshooting features are also included. “Agent Observability” visualizes an agent’s complex reasoning paths, with a focus on debugging faults and errors. When performance falls short of expectations, “Agent Optimizer” automatically adjusts system instructions to improve accuracy, aiding ongoing refinement.

Centered on Google models but supporting over 200 external models

While Google is likely to actively promote its own Gemini models, the open-ecosystem strategy remains intact based on the information released. The company states that users get priority access to over 200 models, including Gemini 3.1 Pro and Gemini 3.1 Flash, open models such as Gemma 4, and music/audio generation models such as Lyria 3.

External models are also supported: Google indicates that third-party models such as Anthropic PBC’s Claude 3.5 Sonnet and Haiku will be available. For enterprise customers, this provides the flexibility to choose models to fit business needs rather than being limited to a specific lineup.

Google Cloud’s release is not just an added set of AI features but a platform-level re-integration of the enterprise AI agent operating stack. The attempt to unify development, deployment, security, and optimization under the goal of the “autonomous enterprise” is significant. However, how stable these AI agents prove in real enterprise environments, and how far security concerns are mitigated, will be key factors in future competitiveness.

TP AI Notice: This article is summarized based on TokenPost.ai’s foundational language model. Major content may be omitted or inconsistent with facts.
