
Cisco Live EMEA · 17m 32s

How Ansible becomes the execution layer for agentic AI

What happens when AI agents need to make changes to your production infrastructure? Red Hat has architected a solution that balances autonomous AI decision-making with the trusted execution environments enterprises demand.

In this Techzine TV interview from Cisco Live EMEA, Sathish Balakrishnan reveals how the Ansible automation platform is evolving to support the agentic AI revolution sweeping through IT operations. The conversation uncovers technical innovations that most organizations haven’t yet considered—and strategic decisions that will shape how AI agents interact with critical infrastructure.

Discover why Red Hat chose to implement ephemeral MCP servers instead of traditional persistent deployments, and what this architectural decision means for security and access control. Learn how role-based permissions work when AI agents call into automation platforms, and why the execution layer matters more than you might think.

What you’ll learn in this video

Balakrishnan explains the fundamental challenge facing enterprises deploying agentic AI: organizations don’t want autonomous agents directly executing changes on core banking applications or critical servers. The solution requires an intermediary layer that AI agents can trust and that IT teams have relied on for years. But how does that actually work in practice?
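The interview doesn't spell out implementation details, but the intermediary idea can be pictured with a small sketch: the agent never touches a server directly; it asks an execution layer, which checks the agent's role against an allowlist before dispatching anything. The role names and job templates below are invented for illustration, not Red Hat's actual interface.

```python
# Hypothetical sketch of an execution layer that gates agent requests by role.
# Role names and job templates are made up; a real platform (e.g. Ansible
# Automation Platform) enforces this with its own RBAC model.

ROLE_PERMISSIONS = {
    "network-agent": {"restart_switch_port", "collect_facts"},
    "patch-agent": {"apply_os_updates"},
}

def launch_job(agent_role: str, job_template: str) -> str:
    """Accept a job request only if the agent's role permits that template."""
    allowed = ROLE_PERMISSIONS.get(agent_role, set())
    if job_template not in allowed:
        raise PermissionError(
            f"role {agent_role!r} may not launch {job_template!r}"
        )
    # In a real deployment this would dispatch to the automation platform;
    # here we just return a token to show the request was accepted.
    return f"queued:{job_template}"
```

The point of the pattern: the AI agent's autonomy stops at the request; execution happens only through templates the operator has pre-approved for that role.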

The interview reveals Red Hat’s distinctive approach to MCP (Model Context Protocol) integration, an implementation that differs significantly from standard client-server models. You’ll understand how container-based ephemeral servers avoid the attack surface of persistent deployments, and how this architecture enables granular access control without complex gateway configurations.
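The interview doesn't describe the mechanics, but the ephemeral pattern can be sketched roughly: each agent session gets a fresh container that serves MCP over stdio and removes itself when the session ends, so no long-lived server sits around holding credentials or listening on a port. The image name, environment variable, and flags below are assumptions for illustration, not a documented Red Hat interface.

```python
# Sketch of the ephemeral pattern: compose a container invocation that runs
# an MCP server for exactly one session and deletes itself on exit.
# The image name and the MCP_ROLE variable are hypothetical.

def ephemeral_mcp_command(image: str, role: str) -> list[str]:
    """Build a single-session, stdio-based MCP server invocation."""
    return [
        "podman", "run",
        "--rm",                       # container is removed as soon as it exits
        "--interactive",              # MCP traffic over stdin/stdout, no open port
        "--env", f"MCP_ROLE={role}",  # hypothetical: scope credentials to one role
        image,
    ]
```

Because the server exists only for the life of one conversation, access control collapses to "what was this container allowed to do when it was launched," rather than managing permissions on a shared, always-on endpoint.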

Find out why Ansible Lightspeed, which has been generating playbooks with AI for four years, is now expanding beyond IBM watsonx to support multiple AI models. The discussion covers the validation mechanisms that ensure AI-generated automation code meets syntactic and functional requirements before execution: critical guardrails that prevent unvetted AI outputs from affecting production systems.
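The interview doesn't detail the validation pipeline, but the syntactic half of it can be sketched: before anything runs, check that the generated playbook at least parses and has the basic shape of a list of plays. In practice you would parse the YAML first (e.g. with PyYAML) and then run real tooling such as `ansible-playbook --syntax-check` or ansible-lint; this sketch only covers the structural floor, and functional validation would need far more.

```python
# Minimal sketch of a structural gate for AI-generated playbooks. Takes
# already-parsed YAML data and checks the basic play shape; it is a
# syntactic floor, not a guarantee the playbook does anything sensible.

def plays_are_well_formed(plays: object) -> bool:
    """Return True only for a non-empty list of plays, each of which
    is a mapping with at least 'hosts' and 'tasks' keys."""
    if not isinstance(plays, list) or not plays:
        return False
    return all(
        isinstance(play, dict) and "hosts" in play and "tasks" in play
        for play in plays
    )
```

A gate like this rejects model output that is prose, a bare task list, or truncated YAML before it ever reaches an execution environment; the functional checks the interview alludes to would sit behind it.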

Key topics covered

  • Why automation platforms serve as the interconnection layer AI agents require
  • How ephemeral MCP servers provide role-based access without persistent vulnerabilities
  • The parallel development strategy that allows customers to adopt MCP gradually
  • Post-processing validation that makes AI-generated playbooks trustworthy
  • Why frontier AI models now match specialized training from four years ago
  • The new drag-and-drop workflow engine announcement coming at Red Hat Summit
  • How Cisco’s full-stack edge solutions integrate with Red Hat’s hybrid cloud strategy
  • Why automation remains a peacetime initiative despite being most needed during crises

The conversation also explores the deepening partnership between Red Hat and Cisco, including GPL agreements that embed Red Hat’s software stack across Cisco AI Hub, Intersight, and upcoming Meraki integrations. Balakrishnan discusses how unified edge and branch solutions reduce the number of integration points required, which in turn can reduce the number of MCP servers needed in agentic architectures.

For IT leaders evaluating how agentic AI will impact their operations, this interview provides concrete technical details about implementation approaches, security considerations, and adoption timelines. The insights reveal both the opportunities and challenges of allowing AI agents to participate in infrastructure management while maintaining the control and trust that production environments demand.