Discover how observability has transformed from monitoring infrastructure to measuring the true efficiency and business impact of AI-generated code in this conversation from KubeCon.
Andreas Grabner from Dynatrace shares 18 years of insights on a fundamental shift in software development. As AI increasingly writes production code, teams need new ways to understand not just what code is generated, but how efficiently AI systems work, which models perform best, and what business value results. This isn’t about vanity metrics like lines of code; it’s about measuring real outcomes.
In this video, Grabner explains why traditional observability approaches fall short for AI-driven development and what’s replacing them. He reveals how OpenTelemetry standards now enable unprecedented visibility into AI workflows, from the initial prompt through every tool call and iteration until the code ships. The implications are profound for both cost management and development velocity.
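To make the idea concrete, here is a minimal stdlib sketch of the kind of trace data such visibility yields: a root span for the generation request with child spans for each tool call, carrying token counts. The attribute names loosely follow OpenTelemetry's GenAI semantic conventions, but the model name and numbers are invented for illustration; this is not the actual Dynatrace implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step in an AI coding workflow: the prompt, a tool call, an iteration."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def total_tokens(span: Span) -> int:
    """Sum input + output tokens across a span and all of its children."""
    own = (span.attributes.get("gen_ai.usage.input_tokens", 0)
           + span.attributes.get("gen_ai.usage.output_tokens", 0))
    return own + sum(total_tokens(c) for c in span.children)

# Hypothetical trace: one prompt that triggers two tool calls before code ships.
root = Span("code_generation", {"gen_ai.request.model": "example-model",
                                "gen_ai.usage.input_tokens": 1200,
                                "gen_ai.usage.output_tokens": 400})
root.children.append(Span("tool_call:run_tests",
                          {"gen_ai.usage.input_tokens": 300,
                           "gen_ai.usage.output_tokens": 150}))
root.children.append(Span("tool_call:apply_patch",
                          {"gen_ai.usage.input_tokens": 500,
                           "gen_ai.usage.output_tokens": 250}))

print(total_tokens(root))  # 2800
```

Rolling token usage up the span tree is what turns raw traces into the cost and efficiency questions the interview focuses on.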
Burning tokens
The conversation explores critical questions facing development teams today: How do you measure whether AI is actually helping or just burning through tokens inefficiently? Which AI models work best for specific coding tasks? How can you detect when your AI agent is stuck in inefficient loops? And perhaps most importantly, how do you connect AI-generated code to actual business outcomes, such as feature adoption and revenue?
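One of those questions, spotting an agent stuck in an inefficient loop, can be approximated with a simple heuristic over the trace: flag any tool call the agent keeps repeating with identical arguments. The trace format and threshold below are illustrative assumptions, not a method described in the interview.

```python
from collections import Counter

def detect_stuck_loop(tool_calls, threshold=3):
    """Flag tool calls the agent issues repeatedly with identical arguments,
    a common sign it is iterating without progress (and burning tokens)."""
    counts = Counter((c["tool"], c["args"]) for c in tool_calls)
    return [call for call, n in counts.items() if n >= threshold]

# Hypothetical agent trace: the same failing test run three times.
trace = [
    {"tool": "run_tests", "args": "test_auth.py"},
    {"tool": "edit_file", "args": "auth.py"},
    {"tool": "run_tests", "args": "test_auth.py"},
    {"tool": "run_tests", "args": "test_auth.py"},
]
print(detect_stuck_loop(trace))  # [('run_tests', 'test_auth.py')]
```

A real system would weigh this against token spend per repeat, so remediation can kick in before costs spiral.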
Key insights revealed
- Why observability now serves two personas: human developers and AI agents themselves
- The shift from generic large models to specialized models trained for specific development tasks
- How to compare model performance across use cases using A/B testing capabilities
- Why most developers are making a critical mistake by coding without observability feedback loops
- How automated remediation can fix inefficient AI workflows before costs spiral
- The role of model context protocol (MCP) in connecting AI to observability platforms
- What “observability in the AI native era” means for regulated industries and enterprise adoption
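The A/B comparison of models across use cases mentioned above reduces, at its simplest, to aggregating per-model outcomes for a given task type. The sketch below assumes a hypothetical results schema (model name, pass/fail, token count); the numbers and model names are made up for illustration.

```python
from collections import defaultdict

def compare_models(results):
    """Aggregate per-model success rate and average token cost for one task type."""
    stats = defaultdict(lambda: {"runs": 0, "passes": 0, "tokens": 0})
    for r in results:
        s = stats[r["model"]]
        s["runs"] += 1
        s["passes"] += r["passed"]
        s["tokens"] += r["tokens"]
    return {m: {"success_rate": s["passes"] / s["runs"],
                "avg_tokens": s["tokens"] / s["runs"]}
            for m, s in stats.items()}

# Hypothetical A/B data for a "write unit tests" task.
runs = [
    {"model": "model-a", "passed": True,  "tokens": 900},
    {"model": "model-a", "passed": True,  "tokens": 1100},
    {"model": "model-b", "passed": False, "tokens": 2400},
    {"model": "model-b", "passed": True,  "tokens": 2000},
]
print(compare_models(runs))
```

Even this toy aggregation shows why success rate alone is misleading: a model that passes half the time at twice the token cost is far more expensive per successful task.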
Whether you’re integrating AI into your development workflow, trying to control AI costs, or simply wondering whether your current AI tools are delivering value, this conversation offers actionable frameworks for measurement and optimization. Grabner also discusses his new book on the topic, co-authored with experts in security and regulated environments.
Watch the full interview to understand how leading organizations are moving beyond hoping AI helps to actually measuring and optimizing its impact on development velocity, code quality, and business outcomes.