Graceful Shutdown in Go: Practical Patterns by mkl95

9 Comments

  • Post Author
    wbl
    Posted May 4, 2025 at 10:10 pm

    If a distributed system relies on clients exiting gracefully in order to work, the system will eventually break badly.

  • Post Author
    evil-olive
    Posted May 4, 2025 at 11:14 pm

    another factor to consider is that if you have a typical Prometheus `/metrics` endpoint that gets scraped every N seconds, there's a period in between the "final" scrape and the actual process exit where any recorded metrics won't get propagated. this may give you a false impression about whether there are any errors occurring during the shutdown sequence.

    it's also possible, if you're not careful, to lose the last few seconds of logs from when your service is shutting down. for example, if you write to a log file that is watched by a sidecar process such as Promtail or Vector, and on startup the service truncates and starts writing to that same path, you've got a race condition that can cause you to lose logs from the shutdown.
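
A stdlib-only sketch of one way to mitigate both issues (the scrape interval, log path, and port are illustrative assumptions, not from the comment): keep `/metrics` reachable and wait roughly one scrape interval before exiting, and open the shared log file in append mode rather than truncating it on startup.

```go
package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

// Assumed to match the Prometheus scrape_interval for this target.
const scrapeInterval = 15 * time.Second

func main() {
    // Append instead of truncate, so a log-shipping sidecar still tailing
    // the previous run's file does not lose the shutdown-phase lines.
    f, err := os.OpenFile("/var/log/app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    log.SetOutput(f)

    // /metrics is assumed to be registered on the default mux.
    metricsSrv := &http.Server{Addr: ":9090", Handler: http.DefaultServeMux}
    go func() { _ = metricsSrv.ListenAndServe() }()

    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
    defer stop()
    <-ctx.Done()

    // ... drain application traffic here ...

    // Leave the metrics endpoint up for one more scrape so counters and
    // error gauges recorded during shutdown are actually collected.
    time.Sleep(scrapeInterval)
    _ = metricsSrv.Shutdown(context.Background())
    log.Println("shutdown complete")
}
```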

  • Post Author
    gchamonlive
    Posted May 5, 2025 at 1:11 am

    This is one of the things I think Elixir is really smart about handling. I'm not very experienced in it, but it seems to me that designing your application around tiny VM processes that are meant to panic, quit, and get respawned eliminates the need to intentionally create graceful shutdown routines, because this is already embedded in the application architecture.

  • Post Author
    deathanatos
    Posted May 5, 2025 at 1:51 am

    > After updating the readiness probe to indicate the pod is no longer ready, wait a few seconds to give the system time to stop sending new requests.

    > The exact wait time depends on your readiness probe configuration

    A terminating pod is not ready by definition. The service will also mark the endpoint as terminating (and as not ready). This occurs on the transition into Terminating; you don't have to fail a readiness check to cause it.

    (I don't know about the ordering of the SIGTERM & the various updates to the objects such as Pod.status or the endpoint slice; there might be a small window after SIGTERM where you could still get a connection, but it isn't the large "until we fail a readiness check" TFA implies.)

    (And as someone who manages clusters, honestly, that infinitesimal window probably doesn't matter. Just stop accepting new connections, gracefully close existing ones, and terminate reasonably fast. But I feel like half of the apps I work with fall into either a bucket of "handle SIGTERM & take forever to terminate" or "fail to handle SIGTERM (and take forever to terminate)".)
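
A minimal sketch of that "stop accepting, drain, exit promptly" behavior for a plain net/http server (the 20-second deadline is an assumption; keep it below the pod's terminationGracePeriodSeconds):

```go
package main

import (
    "context"
    "errors"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

    go func() {
        if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
            log.Fatal(err)
        }
    }()

    // SIGTERM from the kubelet, or Ctrl-C locally.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
    defer stop()
    <-ctx.Done()

    // Shutdown closes the listener immediately (no new connections) and
    // waits for in-flight requests, up to the deadline.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
    defer cancel()
    if err := srv.Shutdown(shutdownCtx); err != nil {
        log.Printf("forced shutdown: %v", err)
    }
}
```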

  • Post Author
    giancarlostoro
    Posted May 5, 2025 at 2:38 am

    I had a coworker who would always say: if your program cannot cleanly handle Ctrl-C and a few other commands to close it, then it's written poorly.

  • Post Author
    zdc1
    Posted May 5, 2025 at 4:38 am

    I've been bitten by the surprising amount of time it takes for Kubernetes to update load balancer target IPs in some configurations. For me, 90% of the graceful shutdown battle was just ensuring that traffic was actually being drained before pod termination.

    Adding a global preStop hook with a 15-second sleep did wonders for our HTTP 503 rates. It creates time between when load balancer deregistration is kicked off and when SIGTERM is actually delivered to the application, which in turn simplifies a lot of the application-side handling.
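
A sketch of that preStop arrangement as a pod spec fragment (names and durations are illustrative; the container image needs a `sleep` binary for the exec hook):

```yaml
spec:
  terminationGracePeriodSeconds: 45   # must exceed preStop sleep + app drain time
  containers:
    - name: app
      image: example/app:latest
      lifecycle:
        preStop:
          exec:
            # Keep serving while endpoints/load-balancer targets deregister;
            # SIGTERM is only sent after this hook returns.
            command: ["sleep", "15"]
```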

  • Post Author
    eberkund
    Posted May 5, 2025 at 6:59 am

    I created a small library for handling graceful shutdowns in my projects: https://github.com/eberkund/graceful

    I find that I typically have a few services that I need to start up, and they sometimes have different mechanisms for start-up and shutdown. Sometimes you need to instantiate an object first, sometimes you have a context you want to cancel, other times you have a "Stop" method to call.

    I designed the library to help me consolidate all of this in one place behind a unified API.
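
A rough sketch of what consolidating those different start/stop mechanisms behind one interface can look like; this is illustrative and not the actual API of the linked library:

```go
package graceful

import "context"

// Service is the unified shape: anything that can be started and stopped.
type Service interface {
    Start(ctx context.Context) error
    Stop(ctx context.Context) error
}

// StopperFunc adapts "I only have a stop function" (cancelling a context,
// calling an object's Stop method, closing a channel) into a Service.
type StopperFunc func(ctx context.Context) error

func (f StopperFunc) Start(context.Context) error    { return nil }
func (f StopperFunc) Stop(ctx context.Context) error { return f(ctx) }

// Group starts services in registration order and stops them in reverse.
type Group struct{ services []Service }

func (g *Group) Add(s Service) { g.services = append(g.services, s) }

func (g *Group) Run(ctx context.Context) error {
    for _, s := range g.services {
        if err := s.Start(ctx); err != nil {
            return err
        }
    }
    <-ctx.Done() // wait for the shutdown signal
    for i := len(g.services) - 1; i >= 0; i-- {
        _ = g.services[i].Stop(context.Background())
    }
    return nil
}
```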

  • Post Author
    cientifico
    Posted May 5, 2025 at 7:19 am

    We've adopted Google Wire for some projects at JustWatch, and it's been a game changer. It's surprisingly under the radar, but it helped us eliminate messy shutdown logic in Kubernetes. Wire forces clean dependency injection, so now everything shuts down in order instead of… well, who knows :-D

    https://go.dev/blog/wire
    https://github.com/google/wire
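
For ordered teardown specifically, Wire lets a provider return a cleanup function alongside its value, and the generated injector returns an aggregate cleanup that runs those functions in reverse construction order. A sketch with made-up provider names (NewConfig, NewDB, NewServer) and the database driver import elided:

```go
// providers.go
package main

import (
    "database/sql"
    "net/http"
)

type Config struct{ DSN, Addr string }

func NewConfig() *Config { return &Config{DSN: "postgres://localhost/app", Addr: ":8080"} }

func NewDB(cfg *Config) (*sql.DB, func(), error) {
    db, err := sql.Open("postgres", cfg.DSN) // driver import elided
    if err != nil {
        return nil, nil, err
    }
    return db, func() { db.Close() }, nil // cleanup: close the pool
}

func NewServer(cfg *Config, db *sql.DB) (*http.Server, func(), error) {
    srv := &http.Server{Addr: cfg.Addr}
    return srv, func() { srv.Close() }, nil // cleanup: stop the listener
}
```

```go
// wire.go — build-tagged template; the `wire` tool generates wire_gen.go from it.
//go:build wireinject

package main

import (
    "net/http"

    "github.com/google/wire"
)

func InitializeServer() (*http.Server, func(), error) {
    wire.Build(NewConfig, NewDB, NewServer)
    return nil, nil, nil
}
```

Calling the cleanup function returned by InitializeServer tears things down in reverse dependency order: server first, then database.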

  • Post Author
    liampulles
    Posted May 5, 2025 at 7:30 am

    I tend to use a waitgroup-plus-context pattern. Any internal service that needs to wind down on shutdown gets a context it can listen to in a goroutine to start shutting down, and a waitgroup to indicate that it has finished shutting down.

    Then the main app goroutine can cancel the context when it wants to shut down, and block on the waitgroup until everything is closed.
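
A minimal sketch of that pattern (the worker names and tick interval are just for illustration):

```go
package main

import (
    "context"
    "log"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func worker(ctx context.Context, wg *sync.WaitGroup, name string) {
    defer wg.Done() // signals "finished shutting down"
    for {
        select {
        case <-ctx.Done():
            log.Printf("%s: draining and exiting", name)
            // ... flush buffers, close connections, etc. ...
            return
        case <-time.After(time.Second):
            log.Printf("%s: doing work", name)
        }
    }
}

func main() {
    // Cancelled on SIGTERM/SIGINT; workers watch it to begin shutdown.
    ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer cancel()

    var wg sync.WaitGroup
    for _, name := range []string{"consumer", "flusher"} {
        wg.Add(1)
        go worker(ctx, &wg, name)
    }

    <-ctx.Done() // shutdown requested
    wg.Wait()    // block until every service reports it is done
    log.Println("all services stopped")
}
```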
