How OpenObserve cuts observability costs by 140x

A four-year-old open source observability platform is challenging industry giants like DataDog, Splunk, and Elasticsearch with dramatically lower costs and superior performance.

In this interview from KubeCon, Prabhat Sharma, founder and CEO of OpenObserve, explains how modern technology choices enable their platform to process over 2.5 petabytes of data daily while reducing infrastructure requirements by 80% and storage costs by up to 140 times compared to legacy solutions.

Watch this video to understand how OpenObserve is reimagining observability for the cloud-native era with AI-powered automation and architectural decisions that weren’t possible when older platforms were designed.

What you’ll learn from this interview

Sharma reveals the technical innovations that make OpenObserve’s dramatic performance improvements possible, including the choice of Rust as a programming language and Parquet as a storage format. You’ll discover why these seemingly simple decisions enable customers to replace five-node Elasticsearch clusters with a single OpenObserve node while maintaining search performance and gaining 10x better analytics capabilities.
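To see why a columnar format like Parquet can shrink storage so dramatically, here is a minimal, self-contained sketch in Python. It does not use Parquet itself or any OpenObserve code; it simply contrasts row-oriented log records with the same data regrouped by column (the core idea behind Parquet) and compares their compressed sizes. The field names and values are synthetic, chosen to resemble typical observability data.

```python
import gzip
import json
import random

# Synthetic sample: 10,000 log records with repetitive fields,
# typical of observability workloads.
random.seed(0)
records = [
    {
        "timestamp": 1700000000 + i,
        "level": random.choice(["INFO", "WARN", "ERROR"]),
        "service": random.choice(["api", "auth", "billing"]),
        "latency_ms": random.randint(1, 500),
    }
    for i in range(10_000)
]

# Row-oriented layout: one JSON object per record,
# as in raw log shipping or document stores.
row_bytes = "\n".join(json.dumps(r) for r in records).encode()

# Column-oriented layout: each field's values stored contiguously,
# the organizing principle of Parquet (before its per-column encodings).
columns = {key: [r[key] for r in records] for key in records[0]}
col_bytes = json.dumps(columns).encode()

row_gz = len(gzip.compress(row_bytes))
col_gz = len(gzip.compress(col_bytes))
print(f"row-oriented compressed:    {row_gz:>8} bytes")
print(f"column-oriented compressed: {col_gz:>8} bytes")
print(f"columnar layout is {row_gz / col_gz:.1f}x smaller")
```

Grouping similar values together lets the compressor exploit their redundancy, and real Parquet goes much further with per-column encodings such as dictionary and run-length encoding, which is one reason the storage-cost gap over row-oriented systems can grow so large at petabyte scale.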

The conversation explores real-world deployment challenges at massive scale, including the moment OpenObserve hit hard limits in Google Cloud Platform that were entirely undocumented. Learn how processing petabytes of data daily surfaces infrastructure constraints that most companies never encounter.

Perhaps most intriguing is Sharma’s vision for the future of observability, where traditional dashboards and user interfaces become obsolete. He explains how AI-powered SRE agents will automatically detect, analyze, and eventually remediate infrastructure problems without human intervention, fundamentally changing how organizations approach system reliability.

Key insights covered in the video

  • How OpenObserve reduces a five-node cluster to one node with better performance
  • The architectural decisions that enable 140x lower storage costs than Elasticsearch
  • Why enterprise features are free for companies under 200GB daily ingestion
  • Real-world challenges of processing 2.5 petabytes of data per day
  • The undocumented Google Cloud limitation discovered at petabyte scale
  • How AI SRE agents compress terabytes of logs into analyzable insights
  • Why the future of observability eliminates dashboards entirely
  • The technology stack choices that enable extreme performance gains

Whether you’re struggling with observability costs, exploring alternatives to DataDog or Splunk, or interested in how AI is transforming infrastructure management, this interview provides valuable insights into the next generation of observability platforms designed for cloud-native, data-intensive environments.