
It’s the end of observability as we know it (and I feel fine) by gpi

23 Comments

  • techpineapple
    Posted June 11, 2025 at 1:10 am

    I feel like the alternate title of this could be “how to 10x your observability costs with this one easy trick”. It didn’t really show a way to get rid of all the graphs; the prompt was “show me why my latency spikes every four hours”. That’s really cool, but in order to generate that prompt you need alerts and graphs. How do you know your latency is spiking so you can generate the prompt?

    The devil seems to be in the details, but you’re running a whole bunch more compute for anomaly detection and “sub-second query performance, unified data storage”, which again sounds like throwing enormously more money at the problem. I can totally see why this is great for Honeycomb, though; they’re going to make bank.
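
    Concretely, the wiring still has to look something like the sketch below: a deterministic alert fires first, and only then does the agent get its famous prompt. Both helpers are hypothetical stand-ins for a metrics store and an LLM client.

      import random

      LATENCY_SLO_MS = 500  # per-service alert threshold

      def query_p99_latency(window_minutes: int = 5) -> float:
          """Hypothetical stand-in for a metrics-store query (fake data here)."""
          return random.gauss(400, 150)

      def ask_agent(prompt: str) -> str:
          """Hypothetical stand-in for a call to an LLM investigation agent."""
          return f"(agent investigates: {prompt!r})"

      def check_latency_alert() -> None:
          p99 = query_p99_latency()
          if p99 > LATENCY_SLO_MS:
              # The classic alert still fires first; the agent only gets its
              # prompt because deterministic monitoring noticed the spike.
              print(ask_agent(
                  f"p99 latency is {p99:.0f} ms against a {LATENCY_SLO_MS} ms SLO. "
                  "Show me why my latency is spiking."
              ))

      check_latency_alert()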

  • stlava
    Posted June 11, 2025 at 2:00 am

    I feel that if you need an LLM to help pivot between existing data, it just means the observability tool has gaps in its user functionality. This is by far my biggest gripe with DataDog today. All the data is there, but going from a database query to front-end traces, which should be easy, is not.

    Sure, we can use an LLM, but for now I can click around faster (if those breadcrumbs exist) than it can reason.

    Also, the LLM would only point in a direction, and I’m still going to have to use the UI to confirm.
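
    To be concrete about what those breadcrumbs are: if the front-end request and the database query share a trace ID, the pivot is mechanical. A minimal sketch using the real opentelemetry-sdk API; the service and span names are illustrative.

      from opentelemetry import trace
      from opentelemetry.sdk.trace import TracerProvider
      from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

      provider = TracerProvider()
      provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
      trace.set_tracer_provider(provider)
      tracer = trace.get_tracer("checkout-service")

      # Parent and child spans share one trace ID, so a UI (or an LLM)
      # can walk from the front-end request down to the exact query.
      with tracer.start_as_current_span("GET /checkout"):
          with tracer.start_as_current_span("db.query") as db_span:
              db_span.set_attribute("db.statement", "SELECT * FROM carts WHERE user_id = ?")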

  • physix
    Posted June 11, 2025 at 2:04 am

    I'd like to see the long list of companies that are in the process of being le cooked.

  • ok_dad
    Posted June 11, 2025 at 2:08 am

    "Get AI to do stuff you can already do with a little work and some experts in the field."

    What a good business strategy!

    I could post this comment on 80% of the AI application companies today, sadly.

  • AdieuToLogic
    Posted June 11, 2025 at 2:10 am

    This post is a thinly veiled marketing promo. Here's why.

    Skip to the summary section titled "Fast feedback is the only feedback" and its first assertion:

      ... the only thing that really matters is fast, tight
      feedback loops at every stage of development and operations.
    

    This is industry dogma generally considered "best practice" and sets up the subsequent straw man:

      AI thrives on speed—it'll outrun you every time.
    

    False.

    "AI thrives" on many things, but "speed" is not one of them. Note the false consequence ("it'll outrun you every time") used to set up the the epitome of vacuous sales pitch drivel:

      To succeed, you need tools that move at the speed of AI as well.
    

    I hope there's a way I can possibly "move at the speed of AI"…

      Honeycomb's entire modus operandi is predicated on fast
      feedback loops, collaborative knowledge sharing, and
      treating everything as an experiment. We’re built for the
      future that’s here today, on a platform that allows us to
      be the best tool for tomorrow.
    

    This is as subtle as a sledgehammer to the forehead.

    What's even funnier is the lame attempt to appear objective after all of this:

      I’m also not really in the business of making predictions.
    

    Really? Did the author read anything they wrote before this point?

  • zug_zug
    Posted June 11, 2025 at 2:37 am

    As somebody who's good at RCA, I'm worried all my embarrassed coworkers are going to take at face value a tool that's confidently incorrect 10% of the time, and screw stuff up more rather than publicly admit they don't know something.

    It'd be less bad if the tool came to a conclusion, then looked for data to disprove that interpretation, and then made a more reliably argument or admitted its uncertainty.

  • resonious
    Posted June 11, 2025 at 3:03 am

    The title is a bit overly dramatic. You still need all of your existing observability tools, so nothing is ending. You just might not need to spend quite as much time building and staring at graphs.

    It's the same effect LLMs are having on everything, it seems. They can help you get faster at something you already know how to do (and help you learn how to do something!), but they don't seem to outright replace any particular skill.

  • stego-tech
    Posted June 11, 2025 at 3:06 am

    Again, sales pitch aside, this is one of the handful of valuable LLM applications out there. Monitoring and observability have long been the exclusive domain of SRE teams in large orgs while remaining out of reach for smaller orgs (speaking strictly from an IT perspective, NOT dev), because identifying valuable metrics and carving up heartbeats and baselines for them takes a lot of time, specialized tooling, extensive dev environments to validate changes, and change controls to ensure you don’t torch production.

    With LLMs trained on the most popular tools out there, IT teams short on funds or expertise finally have the ability to implement “big boy” observability and monitoring deployments built on more open frameworks or tools, rather than yet-another-expensive-subscription.

    For usable dashboards and straightforward observability setups, LLMs are a kind of god-send for IT folks who can troubleshoot and read documentation, but lack the time for a “deep dive” on every product suite the CIO wants to shove down our throats. Add in an ability to at least give a suggested cause when sending a PagerDuty alert, and you’ve got a revolution in observability for SMBs and SMEs.
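
    As a sketch of that last bit, the suggested cause riding along with the page: this is what it could look like against PagerDuty’s Events API v2, with the LLM call stubbed out as a hypothetical helper.

      import requests

      def suggest_cause(alert_summary: str) -> str:
          """Hypothetical LLM call returning a one-line suspected cause."""
          return "suspected: connection-pool exhaustion after the 14:02 deploy"

      def page_with_context(routing_key: str, summary: str) -> None:
          # PagerDuty Events API v2: the LLM's guess rides along as a
          # custom detail rather than replacing the deterministic alert.
          requests.post(
              "https://events.pagerduty.com/v2/enqueue",
              json={
                  "routing_key": routing_key,
                  "event_action": "trigger",
                  "payload": {
                      "summary": summary,
                      "source": "checkout-service",
                      "severity": "critical",
                      "custom_details": {"suggested_cause": suggest_cause(summary)},
                  },
              },
              timeout=10,
          )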

  • mediumsmart
    Posted June 11, 2025 at 3:52 am

    I thought the article was about the end of observability of the real world as we knew it and was puzzled why they felt fine.

  • kacesensitive
    Posted June 11, 2025 at 4:23 am

    LLMs won't replace observability, but they absolutely change the game. Asking "why is latency spiking" and getting a coherent root cause in seconds is powerful. You still need good telemetry, but this shifts the value from visualizing data to explaining it.

  • RainyDayTmrw
    Posted June 11, 2025 at 4:39 am

    I think we are, collectively, greatly underestimating the value of determinism and, conversely, the cost of nondeterminism.

    I've been trialing a different product with the same sales pitch. It tries to RCA my incidents by correlating graphs. It ends up looking like this page[1], which is a bit hard to explain in words, but both obvious and hilarious when you see it for yourself.

    [1]: https://tylervigen.com/spurious-correlations
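
    The failure mode is easy to reproduce with no vendor involved: independent random walks correlate strongly all the time, which is exactly what naive graph-matching latches onto. A small demo using only numpy:

      import numpy as np

      rng = np.random.default_rng(0)
      best = 0.0
      # Two *independent* random walks per trial, 500 points each, 200 trials.
      for _ in range(200):
          a = np.cumsum(rng.standard_normal(500))
          b = np.cumsum(rng.standard_normal(500))
          best = max(best, abs(np.corrcoef(a, b)[0, 1]))

      # Trending series routinely show |r| > 0.8 while sharing no cause,
      # which is why "these two graphs move together" is weak evidence.
      print(f"strongest correlation between unrelated walks: {best:.2f}")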

  • favflam
    Posted June 11, 2025 at 4:58 am

    I find people relying way too much on AI tools. If I pay someone a salary, they need to actually understand the answer they give me. And their butt needs to be on the line if the answer is wrong. That is the purpose of a salary: not just to do the work, but to be responsible for the results. AI breaks this in a lot of the use cases I see crop up on ycombinator.

    If an AI tool outstrips a human’s ability to stay in the decision loop, then that tool’s usefulness is not so great.

  • satisfice
    Posted June 11, 2025 at 5:04 am

    So many engineers feel fine about a tool that they cannot rely upon.

    Without reliability, nothing else matters, and this AI that can try hypotheses so much faster than me is not reliable. The point is moot.

  • geraneum
    Posted June 11, 2025 at 5:05 am

    > This isn’t a contrived example. I basically asked the agent the same question we’d ask you in a demo, and the agent figured it out with no additional prompts, training, or guidance. It effectively zero-shot a real-world scenario.

    As I understand it, this is a demo they already use, and the solution is available. Maybe it should have been a contrived example, so we could tell whether the solution was in the training data verbatim. Not that what the LLM did isn’t useful, but if you announce the death of observability as we know it, you need to show that the tool can generalize.

  • akrauss
    Posted June 11, 2025 at 6:19 am

    I would be interested in reading what tools are made available to the LLM, and how everything is wired together to form an effective analysis loop. It seems like this is a key ingredient here.
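
    My guess at the shape, for what it’s worth: a small set of read-only tools exposed to the model, plus a loop that feeds tool results back until the model commits to an answer. Everything below is hypothetical, including the tool names and the llm_step contract.

      # Read-only tools the agent may call; the set is an assumption.
      def query_metrics(expr: str) -> str: ...
      def fetch_traces(filter_expr: str) -> str: ...
      def list_deploys(since: str) -> str: ...

      TOOLS = {
          "query_metrics": query_metrics,
          "fetch_traces": fetch_traces,
          "list_deploys": list_deploys,
      }

      def llm_step(history: list[dict]) -> dict:
          """Hypothetical model call: returns a tool request or a final answer."""
          ...

      def analysis_loop(question: str, max_steps: int = 10) -> str:
          history = [{"role": "user", "content": question}]
          for _ in range(max_steps):
              step = llm_step(history)
              if step["type"] == "answer":
                  return step["content"]
              # The model asked for data: run the tool, append the result,
              # and let it reason over the new evidence next iteration.
              result = TOOLS[step["tool"]](**step["args"])
              history.append({"role": "tool", "content": result})
          return "no conclusion within the step budget"

      # e.g. analysis_loop("why does my latency spike every four hours?")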

  • yellow_lead
    Posted June 11, 2025 at 6:19 am

    Did AI write this entire article?

    > In AI, I see the death of this paradigm. It’s already real, it’s already here, and it’s going to fundamentally change the way we approach systems design and operation in the future.

    How is AI analyzing some data the "end of observability as we know it"?

  • schwede
    Posted June 11, 2025 at 6:26 am

    Maybe I’m just a skeptic, but it seems like a software engineer or SRE familiar with the application should be able to reach the load-testing conclusion fairly easily. Certainly not as fast as 80 seconds, though, which is impressive. As noted, you still need an engineer to review the data and complete the proposed action items.

  • devmor
    Posted June 11, 2025 at 6:27 am

    As the AI growth cycle stagnates, valuations continue to fly wildly out of control, and more and more of the industry shifts from hopeful to bearish sentiment, I’ve started to find this genre of article extremely funny, if not pitiable.

    Who are you trying to convince with this? It’s not going to work on investors much longer, it’s mostly stopped working on the generically tech-inclined, and it’s never really worked on anyone who understands AI. So who’s left to be suckered by this flowery, desperate prose? Are you just trying to convince yourselves?

  • vanschelven
    Posted June 11, 2025 at 6:37 am

    > New abstractions and techniques… hide complexity, and that complexity requires new ways to monitor and measure.

    If the abstractions hide complexity so well you need an LLM to untangle them later, maybe you were already on the wrong track.

    Hiding isn't abstracting, and if your system becomes observable only with AI help, maybe it's not well designed, just well obfuscated. I've written about this before here: https://www.bugsink.com/blog/you-dont-need-application-perfo…

  • neuroelectron
    Posted June 11, 2025 at 6:50 am

    This would have been really nice to have when I was in Ops. Running MapReduce on logs and looking at dozens of graphs made up most of my working hours. We did eventually get the infrastructure for live filtering but that was just before the entire sector was outsourced.

  • catlifeonmars
    Posted June 11, 2025 at 7:37 am

    Was anyone else just curious about those odd spikes and was disappointed the article didn’t do a deeper dive to explain that unusual shape?

  • heinrichhartman
    Posted June 11, 2025 at 7:44 am

    > New Relic did this for the Rails revolution, Datadog did it for the rise of AWS, and Honeycomb led the way for OpenTelemetry.

    I find this reading of the history of OTel highly biased. OpenTelemetry was born as the merger of OpenCensus (initiated by Google) and OpenTracing (initiated by LightStep):

    https://opensource.googleblog.com/2019/05/opentelemetry-merg…

    > The seed governance committee is composed of representatives from Google, Lightstep, Microsoft, and Uber, and more organizations are getting involved every day.

    Honeycomb has certainly made valuable code & community contributions and championed adoption of the technology, but they are very far from "leading the way".

  • Kiyo-Lynn
    Posted June 11, 2025 at 7:46 am

    I used to think that monitoring and alerting systems were just there to help you quickly and directly see problems. But as the systems grew more complex, I found the dashboards and alerts overwhelming, and I often couldn’t figure out the root cause of an issue.
    Recently, I started using AI to help with analysis, and I found that it can give me clues in a few seconds that I might have spent half a day searching for.

    While it's much more efficient, I sometimes worry that we might be relying too much on these tools and losing our own ability to judge and analyze.
