
It’s the end of observability as we know it (and I feel fine) by gpi
In a really broad sense, the history of observability tools over the past couple of decades has been about a pretty simple concept: how do we make terabytes of heterogeneous telemetry data comprehensible to human beings? New Relic did this for the Rails revolution, Datadog did it for the rise of AWS, and Honeycomb led the way for OpenTelemetry.
The loop has been the same in each case. New abstractions and techniques for software development and deployment gain traction, those abstractions make software more accessible by hiding complexity, and that complexity requires new ways to monitor and measure what’s happening. We build tools like dashboards, adaptive alerting, and dynamic sampling. All of these help us compress the sheer amount of stuff happening into something that’s comprehensible to our human intelligence.
In AI, I see the death of this paradigm. It’s already real, it’s already here, and it’s going to fundamentally change the way we approach systems design and operation in the future.
LLMs are just universal function approximators, but it turns out that those are really useful
I’m going to tell you a story. It’s about this picture:

If you’ve ever seen a Honeycomb demo, you’ve probably seen this image. We love it, because it’s not only a great way to show a real-world problem—it’s something that plays well to our core strengths of enabling investigatory loops. Those little peaks you see in the heatmap represent slow requests in a frontend service that rise over time before suddenly resetting. They represent a small percentage of your users experiencing poor performance—and we all know what this means in the real world: lost sales, poor experience, and general malaise at the continued enshittification of software.
In a Honeycomb demo, we show you how easy it is to use our UI to understand what those spikes actually mean. You draw a box around them, and we run BubbleUp to detect anomalies by analyzing the trace data that’s backing this visualization, showing you what’s similar and what’s different between the spikes and the baseline. Eventually, you can drill down to the specific service and even method call that’s causing the problem. It’s a great demo, and it really shows the power of our platform.
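To give a feel for the idea behind that comparison, here is a rough sketch in Python. It is not how BubbleUp is actually implemented; it just ranks event attributes by how differently their values are distributed inside the selected spikes versus the baseline, which is the heart of the "what's similar and what's different" question. The attribute names you would pass in (service name, endpoint, status code, and so on) are illustrative.

    from collections import Counter

    def rank_attributes(selected_events, baseline_events, attributes):
        """Score each attribute by how differently its values are distributed
        in the selected (anomalous) events versus the baseline events."""
        scores = {}
        for attr in attributes:
            sel = Counter(e.get(attr) for e in selected_events)
            base = Counter(e.get(attr) for e in baseline_events)
            sel_total = sum(sel.values()) or 1
            base_total = sum(base.values()) or 1
            # Total variation distance between the two value distributions.
            values = set(sel) | set(base)
            scores[attr] = 0.5 * sum(
                abs(sel[v] / sel_total - base[v] / base_total) for v in values
            )
        # Attributes whose distributions shift the most are the best leads.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

Feed it the events behind the heatmap as dictionaries of attributes and the top-ranked attributes point you toward the suspect service or method. The real feature does considerably more, but the investigative shape is the same.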
Last Friday, I showed a demo at our weekly internal Demo Day. It started with what I just showed you, and then I ran a single prompt through an AI agent that read as follows:
Please investigate the odd latency spikes in the frontend service that happen every four hours or so, and tell me why they’re happening.

The screenshot here elides the remainder of the response from the LLM (please find the entire text at the end of this post), but there are a few things I want to call out. First, this wasn’t anything too special. The agent was something I wrote myself in a couple of days; it’s just an LLM calling tools in a loop. The model itself is off-the-shelf Claude Sonnet 4. The integration with Honeycomb is our new Model Context Protocol (MCP) server. It took 80 seconds, made eight tool calls, and not only did it tell me why those spikes happened, it figured it out in a pretty similar manner to how we’d tell you to do it with BubbleUp.
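For the curious, "an LLM calling tools in a loop" really is about this much code. The sketch below is illustrative only, assuming the Anthropic Python SDK: the run_query tool, its schema, and the model string are placeholders standing in for the Honeycomb MCP tools the real agent talks to.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Placeholder tool definition; the real agent exposes Honeycomb's MCP tools,
    # whose names and schemas aren't shown in this post.
    TOOLS = [{
        "name": "run_query",
        "description": "Run a Honeycomb query and return summarized results.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

    def call_tool(name, tool_input):
        # In the real agent this call is forwarded to the MCP server.
        raise NotImplementedError(f"wire {name} up to a real tool backend")

    def run_agent(prompt, model="claude-sonnet-4-20250514", max_steps=20):
        messages = [{"role": "user", "content": prompt}]
        for _ in range(max_steps):
            response = client.messages.create(
                model=model, max_tokens=4096, tools=TOOLS, messages=messages
            )
            messages.append({"role": "assistant", "content": response.content})
            if response.stop_reason != "tool_use":
                # No more tool calls: return the model's final answer text.
                return "".join(b.text for b in response.content if b.type == "text")
            # Execute each requested tool call and feed the results back in.
            results = [{
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": str(call_tool(block.name, block.input)),
            } for block in response.content if block.type == "tool_use"]
            messages.append({"role": "user", "content": results})
        return None  # gave up after max_steps rounds of tool calls

Everything interesting lives in what the tools expose and return; the loop itself stays this small.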
This isn’t a contrived example. I basically asked the agent the same question we’d ask you in a demo, and the agent figured it out with no additional prompts, training, or guidance. It effectively zero-shot a real-world scenario.
23 Comments
techpineapple
I feel like the alternate title of this could be “how to 10x your observability costs with this one easy trick”. It didn’t really show a way to get rid of all the graphs; the prompt was “show me why my latency spikes every four hours”. That’s really cool, but in order to generate that prompt you need alerts and graphs. How do you know your latency is spiking so you can write that prompt in the first place?
The devil seems to be in the details, but you’re running a whole bunch more compute for anomaly detection and “sub-second query performance, unified data storage”, which again sounds like throwing enormous amounts of money at the problem. I can totally see why this is great for Honeycomb, though; they’re going to make bank.
stlava
I feel that if you need an LLM to help pivot between existing data, it just means the observability tool has gaps in user functionality. This is by far my biggest gripe with Datadog today. All the data is there, but going from a database query to front-end traces should be easy and is not.
Sure, we can use an LLM, but for now I can click around faster (if those breadcrumbs exist) than it can reason.
Also, the LLM would only point in a direction, and I’m still going to have to use the UI to confirm.
physix
I'd like to see the long list of companies that are in the process of being le cooked.
ok_dad
"Get AI to do stuff you can already do with a little work and some experts in the field."
What a good business strategy!
I could post this comment on 80% of the AI application companies today, sadly.
AdieuToLogic
This post is a thinly veiled marketing promo. Here's why.
Skip to the summary section titled "Fast feedback is the only feedback" and its first assertion:
This is industry dogma generally considered "best practice" and sets up the subsequent straw man:
False.
"AI thrives" on many things, but "speed" is not one of them. Note the false consequence ("it'll outrun you every time") used to set up the the epitome of vacuous sales pitch drivel:
I hope there's a way I can possibly "move at the speed of AI"…
This is as subtle as a sledgehammer to the forehead.
What's even funnier is the lame attempt to appear objective after all of this:
Really? Did the author read anything they wrote before this point?
zug_zug
As somebody who's good at RCA, I'm worried all my embarrassed coworkers are going to take at face value a tool that's confidently incorrect 10% of the time and screw stuff up more instead of having to admit they don't know something publicly.
It'd be less bad if the tool came to a conclusion, then looked for data to disprove that interpretation, and then made a more reliable argument or admitted its uncertainty.
resonious
The title is a bit overly dramatic. You still need all of your existing observability tools, so nothing is ending. You just might not need to spend quite as much time building and staring at graphs.
It's the same effect LLMs are having on everything, it seems. They can help you get faster at something you already know how to do (and help you learn how to do something!), but they don't seem to outright replace any particular skill.
stego-tech
Again, sales pitch aside, this is one of the handful of valuable LLM applications out there. Monitoring and observability have long been the exclusive domain of SRE teams in large orgs while simultaneously out of reach for smaller orgs (speaking strictly from an IT perspective, NOT dev), because identifying valuable metrics and carving up heartbeats and baselines for them takes a lot of time, specialized tooling, extensive dev environments to validate changes, and change controls to ensure you don’t torch production.
With LLMs trained on the most popular tools out there, this gives IT teams short on funds or expertise the ability to finally implement “big boy” observability and monitoring deployments built on more open frameworks or tools, rather than yet-another-expensive-subscription.
For usable dashboards and straightforward observability setups, LLMs are a kind of god-send for IT folks who can troubleshoot and read documentation, but lack the time for a “deep dive” on every product suite the CIO wants to shove down our throats. Add in an ability to at least give a suggested cause when sending a PagerDuty alert, and you’ve got a revolution in observability for SMBs and SMEs.
mediumsmart
I thought the article was about the end of observability of the real world as we knew it and was puzzled why they felt fine.
kacesensitive
LLMs won't replace observability, but they absolutely change the game. Asking "why is latency spiking" and getting a coherent root cause in seconds is powerful. You still need good telemetry, but this shifts the value from visualizing data to explaining it.
RainyDayTmrw
I think we are, collectively, greatly underestimating the value of determinism and, conversely, the cost of nondeterminism.
I've been trialing a different product with the same sales pitch. It tries to RCA my incidents by correlating graphs. It ends up looking like this page[1], which is a bit hard to explain in words, but both obvious and hilarious when you see it for yourself.
[1]: https://tylervigen.com/spurious-correlations
favflam
I find people relying way too much on AI tools. If I pay someone a salary, they need to understand the actual answer they give me. And their butt needs to be on the line if the answer is wrong. That is the purpose of paying them a salary: not just to do the work, but to be responsible for the results. AI breaks this in a lot of the use cases I see crop up on ycombinator.
If some AI tool outstrips the ability of a human to stay in the decision loop, then that AI tool’s usefulness is not so great.
satisfice
So many engineers feel fine about a tool that they cannot rely upon.
Without reliability, nothing else matters, and this AI that can try hypotheses so much faster than me is not reliable. The point is moot.
geraneum
> This isn’t a contrived example. I basically asked the agent the same question we’d ask you in a demo, and the agent figured it out with no additional prompts, training, or guidance. It effectively zero-shot a real-world scenario.
As I understand it, this is a demo they already use, and the solution is available. Maybe it should’ve been a contrived example, so we could tell whether the solution was in the training data verbatim. Not that what the LLM did isn’t useful, but if you announce the death of observability as we know it, you need to show that the tool can generalize.
akrauss
I would be interested in reading what tools are made available to the LLM, and how everything is wired together to form an effective analysis loop. It seems like this is a key ingredient here.
yellow_lead
Did AI write this entire article?
> In AI, I see the death of this paradigm. It’s already real, it’s already here, and it’s going to fundamentally change the way we approach systems design and operation in the future.
How is AI analyzing some data the "end of observability as we know it"?
schwede
Maybe I’m just a skeptic, but it seems like a software engineer or SRE familiar with the application should be able to come to the load-testing conclusion fairly easily. Certainly not as fast as 80 seconds, though, which is impressive. As noted, you still need an engineer to review the data and complete those proposed action items.
devmor
As the AI growth cycle stagnates, valuations continue to fly wildly out of control, and more and more of the industry switches from hopeful to bearish sentiment, I’ve started to find this genre of article extremely funny, if not pitiable.
Who are you trying to convince with this? It’s not going to work on investors much longer, it’s mostly stopped working on the generically tech-inclined, and it’s never really worked on anyone who understands AI. So who’s left to be suckered by this flowery, desperate prose? Are you just trying to convince yourselves?
vanschelven
> New abstractions and techniques… hide complexity, and that complexity requires new ways to monitor and measure.
If the abstractions hide complexity so well you need an LLM to untangle them later, maybe you were already on the wrong track.
Hiding isn't abstracting, and if your system becomes observable only with AI help, maybe it's not well designed, just well obfuscated. I've written about this before here: https://www.bugsink.com/blog/you-dont-need-application-perfo…
neuroelectron
This would have been really nice to have when I was in Ops. Running MapReduce on logs and looking at dozens of graphs made up most of my working hours. We did eventually get the infrastructure for live filtering but that was just before the entire sector was outsourced.
catlifeonmars
Was anyone else just curious about those odd spikes and was disappointed the article didn’t do a deeper dive to explain that unusual shape?
heinrichhartman
> New Relic did this for the Rails revolution, Datadog did it for the rise of AWS, and Honeycomb led the way for OpenTelemetry.
I find this reading of the history of OTel highly biased. OpenTelemetry was born as the merger of OpenCensus (initiated by Google) and OpenTracing (initiated by LightStep):
https://opensource.googleblog.com/2019/05/opentelemetry-merg…
> The seed governance committee is composed of representatives from Google, Lightstep, Microsoft, and Uber, and more organizations are getting involved every day.
Honeycomb has for sure had valuable code & community contributions and championed the technology adoption, but they are very far from "leading the way".
Kiyo-Lynn
I used to think that monitoring and alerting systems were just there to help you see problems quickly and directly. But as systems grew more complex, I found that the dashboards and alerts became overwhelming, and I often couldn’t figure out the root cause of an issue.
Recently, I started using AI to help with analysis, and I found that it can give me clues in a few seconds that I might have spent half a day searching for.
While it's much more efficient, sometimes I worry that, even though AI makes problem-solving easier, we might be relying too much on these tools and losing our own ability to judge and analyze.