Las Vegas, NV | Booth #2021
Find us at Google Cloud Next to learn more about enterprise-ready AI for complex, long-running agents—on your infrastructure, with built-in safety.
Most AI agents degrade when task complexity increases or execution runs long. Claude is designed around that constraint.
Claude on Google Cloud's Vertex AI lets you build production-ready AI agents for long-running, complex tasks. Claude's advanced reasoning sustains multi-step work, while Vertex AI provides the scalability and integrations your team needs.
Deploy with built-in safeguards, enterprise-grade security, and the infrastructure you already use and trust.
Learn more about Claude on Vertex AI.
Tasks that run long, branch unexpectedly, or require sustained judgment across multiple systems expose the gap between what models can do in a demo and what they can do in production.
Organizations like Palo Alto Networks use Claude on Vertex AI to power production systems at scale.
Our joint customers include enterprises and startups across energy, commerce, software development, and more.
Multi-agent architectures are powerful but over-applied. This session from Anthropic helps you recognize when multiple agents genuinely outperform a single agent, and how to build them when they do. You'll learn the three scenarios where multi-agent systems consistently win (context isolation, parallel execution, specialization), four architecture patterns with concrete trade-offs, and the context-centric decomposition strategy most teams get wrong. We'll also cover verification subagents and the pitfalls that undermine even well-designed systems.
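To make the patterns above concrete, here is a minimal sketch of an orchestrator that fans independent subtasks out to specialized subagents in parallel, gives each only the context it needs, and routes the combined output through a verification subagent. The `run_agent` function and the role names are illustrative stand-ins, not Anthropic's actual tooling; a real system would back each agent with a model API call.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a model-backed agent; in production this would
# call an LLM API with a role-specific prompt.
def run_agent(role: str, task: str, context: str) -> str:
    return f"[{role}] result for '{task}'"

def orchestrate(task: str, subtasks: dict) -> dict:
    # Parallel execution: independent subtasks run concurrently, and each
    # subagent sees only its own slice of context (context isolation).
    with ThreadPoolExecutor() as pool:
        futures = {
            role: pool.submit(run_agent, role, task, ctx)
            for role, ctx in subtasks.items()
        }
        results = {role: f.result() for role, f in futures.items()}
    # Verification subagent checks the combined output before it is returned.
    results["verifier"] = run_agent("verifier", task, "\n".join(results.values()))
    return results

results = orchestrate(
    "summarize quarterly incidents",
    {"researcher": "incident logs", "analyst": "metrics export"},
)
```

The key design choice is that decomposition follows context boundaries rather than job titles: each subagent exists because it needs a different slice of context, not because the org chart suggests a role.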
Teams without evals get stuck in reactive loops—catching issues in production, unable to distinguish regressions from noise. Teams that invest early find development accelerates as failures become test cases and metrics replace guesswork. This talk shares what we've learned building evals for Claude Code and deploying them with customers across coding, conversational, and research agents. You'll leave with a roadmap: structuring agent evals, choosing the right graders, balancing capability vs. regression testing, and building a suite you trust.
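The loop described above can be sketched in a few lines: a deterministic grader, a harness that runs an agent over a case set, and a report that separates regressions from noise. `grade_exact`, the toy agent, and the sample cases are all illustrative assumptions, not Anthropic's eval framework; the point is the shape, where each production failure becomes a new test case.

```python
# Minimal sketch of an agent eval harness (illustrative, not a real framework).
def grade_exact(expected: str, actual: str) -> bool:
    # Code-graded check: deterministic and cheap, well suited to regression
    # testing; capability testing typically needs model-graded rubrics.
    return expected.strip() == actual.strip()

def run_eval(agent, cases):
    # Each production failure becomes a new case, so the suite grows over time.
    failures = []
    for case in cases:
        actual = agent(case["input"])
        if not grade_exact(case["expected"], actual):
            failures.append(case["input"])
    return {"total": len(cases), "failed": failures}

# A toy "agent" standing in for a model-backed system.
toy_agent = lambda prompt: prompt.upper()
report = run_eval(toy_agent, [
    {"input": "hello", "expected": "HELLO"},   # passes
    {"input": "world", "expected": "wrold"},   # deliberate failure, caught
])
```

Because the grader is deterministic, a failure here is a real regression rather than sampling noise, which is what makes the suite trustworthy enough to gate releases on.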
Enterprise software will change more in the next two years than it has in the last twenty. The model is inverting: top-down planning, where organizations define and prioritize what gets built, is giving way to bottom-up execution, where AI agents help teams solve problems as fast as they find them. Anthropic shares what we're seeing at the frontier. Teams are already deploying agents on Vertex AI that define, build, and iterate autonomously. We'll ground the vision in real examples and practical frameworks for readiness.
AI agents that work for hours are already in production, and the teams deploying them are pulling ahead. Design patterns for long-running agents are within reach today and will get more powerful as models advance. Anthropic shares lessons from frontier enterprise teams building with Claude, including a framework for long-running agent design applied to two domains: codebase modernization and turning requirements into working applications. You'll see agents in action and learn architectural patterns that make sustained autonomous work reliable.
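One pattern that recurs in long-running agent design is checkpointing: persist progress after every step so that hours of work survive a crash or restart instead of being redone. The sketch below is a generic illustration under assumed names (`run_with_checkpoints`, a JSON state file listing completed steps); it is not a specific framework from the session.

```python
import json
import os
import tempfile

# Illustrative checkpoint/resume pattern for long-running agent pipelines.
def run_with_checkpoints(steps, state_path):
    state = {"done": []}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)          # resume from the last checkpoint
    for name, step in steps:
        if name in state["done"]:
            continue                      # skip work already completed
        step()
        state["done"].append(name)
        with open(state_path, "w") as f:
            json.dump(state, f)           # checkpoint after every step

log = []
path = os.path.join(tempfile.mkdtemp(), "state.json")
run_with_checkpoints([("plan", lambda: log.append("plan")),
                      ("build", lambda: log.append("build"))], path)
```

Running the same pipeline again with the same state file executes nothing, because both steps are already recorded as done; that idempotence is what makes sustained autonomous work restartable.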
Access cutting-edge research, connect with our community, and stay informed about the latest developments in responsible AI.