By Dale Weiler (GitHub)
- Time To Read: ~30 Minutes
- Last Updated: Monday, March 17th 2025
Summary
Linux binary compatibility is plagued by one thing that is often overlooked when evaluating shipping software on Linux. This article will deconstruct how we arrived at that conclusion, how to address it when shipping software today, and what needs to be done to actually fix it.
Introduction
At JangaFX, we make several products that run natively on Linux. We love the flexibility and power that Linux offers our developers, but shipping software on it is a whole different challenge.
Linux is an incredibly powerful platform, but when it comes to shipping software, it can feel like a minefield. Unlike other operating systems, Linux isn’t just one system—it’s a chaotic mix of different services, libraries, and even philosophies. Every distribution does things a little differently, and that means the same executable that works flawlessly on one system might completely break on another.
This shouldn’t even be a problem. The Linux kernel itself has maintained relatively stable system calls. But everything built on top of it changes constantly in ways that break compatibility, making it incredibly frustrating to ship software that “just works.” If you’re developing for Linux, you’re not targeting a single platform—you’re navigating an ecosystem that has evolved without much concern for binary compatibility.
Some of us, coming from the game industry before moving into VFX, have dealt with this problem before. Shipping games on Linux has always been a nightmare, and the same issues persist regardless of industry. In this article, we’re going to explain why we think containers are the wrong approach, how we build and ship Linux software in a way that actually works, what we think is responsible for Linux’s binary compatibility problem, and what needs to change to fix it.
The latter part of this article will get deeply technical about what exactly the problem is and how it can be fixed.
Containers
Tools like Flatpak, AppImage, and similar solutions attempt to simplify shipping executables by creating “containers”—or, as we’ve recently taken to calling them, “a Linux Environment inside a Linux.” Using Linux features like namespaces and chroots, these solutions package an entire Linux environment, complete with all required dependencies, into a single self-contained bundle. In extreme cases, this means shipping an entire Linux user-space just for one application.
One of the major challenges with these containerized solutions is that they often don’t work well with applications that need to interact with the rest of the system. To access hardware-accelerated APIs like OpenGL, Vulkan, VDPAU, or CUDA, an application must dynamically link against the system’s graphics driver libraries. Since these libraries exist outside the container and cannot be shipped with the application, various “pass-through” techniques have been developed to work around this, some of which introduce runtime overhead (e.g., shimming libraries). Because containerized applications are isolated from the system, they often feel isolated too. This creates consistency issues, where the application may not recognize the user’s name, home directory, system settings, desktop environment preferences, or even have proper access to the filesystem.
To work around these limitations, many containerized environments rely on the XDG Desktop Portal protocol, which introduces yet another layer of complexity. This system requires IPC (inter-process communication) through DBus just to grant applications access to basic system features like file selection, opening URLs, or reading system settings—problems that wouldn’t exist if the application weren’t artificially sandboxed in the first place.
We don’t believe that piling on more layers is an acceptable solution. As engineers, we need to stop and ask ourselves: should we keep adding to this tower of Babel, or is it time to peel back some of these abstractions and reevaluate them? At some point, the right solution isn’t more complexity—it’s less.
While containerized solutions can work under certain conditions, we believe that shipping lean, native executables—without containers—provides a more seamless and integrated experience that better aligns with user expectations.
Versioning
When you compile your application, it links against the specific library versions present on the build machine. This means that by default, the versions on a user’s system may not match, causing compatibility issues. Let’s assume the user has all the necessary libraries installed, but the versions don’t match what your application was built against. This is where the real problem begins.
Short of shipping the exact machine used to deploy your application, how do you ensure compatibility with the versions installed on a user’s system?
We believe there are two ways to solve this problem, and we’ve given them our own names:
- Replication Approach – This means bundling all the libraries from the build machine and shipping them with your application. This is the philosophy behind Flatpak and AppImage. We do not use this approach at JangaFX.
- Relaxation Approach – Instead of relying on specific or newer library versions, you link against versions that are so old they’re almost guaranteed to be compatible everywhere. This minimizes the risk of incompatibility on a user’s system.
The first approach works well in cases where the necessary libraries may not exist on a user’s machine, but it fails for libraries that cannot be shipped (we call these “system libraries”). The second approach is particularly effective for system libraries and is the approach we use at JangaFX.
System Libraries
There are various libraries present on a Linux machine that cannot be shipped with your application because they are system libraries: libraries tied to the system itself that cannot be provided in a container. Typically, these include user-space GPU drivers, enterprise-installed security components, and, of course, libc itself.
If you’ve ever tried to distribute Linux binaries, you may have encountered an error message like this:
/lib64/libc.so.6: version `GLIBC_2.18' not found
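This error means the binary was built against a newer glibc than the one installed on the user’s machine. A quick way to see which GLIBC symbol versions a binary actually demands is to read its ELF version-requirement section. Below is a minimal sketch, assuming readelf from binutils is available; the helper name is purely illustrative and not part of our tooling.

#!/bin/env python3
# Minimal sketch: list the GLIBC symbol versions an ELF binary requires.
# Assumes readelf (binutils) is installed; the function name is illustrative.
import re, subprocess, sys

def required_glibc_versions(path):
    # 'readelf -V' dumps the .gnu.version_r section, which records entries
    # such as GLIBC_2.18 for every versioned symbol the binary imports.
    out = subprocess.run(['readelf', '-V', path],
                         capture_output=True, text=True, check=True).stdout
    versions = set(re.findall(r'GLIBC_(\d+\.\d+(?:\.\d+)?)', out))
    return sorted(versions, key=lambda v: tuple(map(int, v.split('.'))))

if __name__ == '__main__':
    for version in required_glibc_versions(sys.argv[1]):
        print(f'GLIBC_{version}')

The newest version it prints is, roughly, the oldest glibc a user’s system can have and still run your binary.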
For those unaware, glibc (GNU C Library) provides the C standard library, POSIX APIs, and the dynamic linker responsible for loading shared libraries, including glibc itself.
GLIBC is an example of a “system library” that cannot be bundled with your application because it includes the dynamic linker itself. This linker is responsible for loading other libraries, some of which may also depend on GLIBC—but not always. Complicating matters further, since GLIBC is a dynamic library, it must also load itself. This self-referential, chicken-and-egg problem highlights GLIBC’s complexity and monolithic design, as it attempts to fulfill multiple roles simultaneously. A large downside to this monolithic design is that upgrading GLIBC often requires upgrading the entire system. Later in this article, we will explain why this structure needs to change to truly solve Linux’s binary compatibility problem.
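You can see this coupling directly in any dynamically linked executable: its ELF program headers name the dynamic linker (the “program interpreter”) it expects, and on glibc-based systems that loader is installed as part of GLIBC. A minimal sketch of how to print it, again assuming readelf is available:

#!/bin/env python3
# Minimal sketch: print the dynamic linker ("program interpreter") an ELF
# executable requests. Assumes readelf (binutils) is installed.
import re, subprocess, sys

def program_interpreter(path):
    # 'readelf -l' lists the program headers; the INTERP segment names the loader.
    out = subprocess.run(['readelf', '-l', path],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r'Requesting program interpreter: (.+?)\]', out)
    return match.group(1) if match else None

if __name__ == '__main__':
    # Typically prints something like /lib64/ld-linux-x86-64.so.2, which is part of GLIBC.
    print(program_interpreter(sys.argv[1]))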
Before you suggest statically linking GLIBC—that’s not an option. GLIBC relies on dynamic linking for features like NSS modules, which handle hostname resolution, user authentication, and network configuration, among other dynamically loaded components. Static linking breaks this because it does not include the dynamic linker, which is why GLIBC does not officially support it. Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime. Static linking the dynamic linker itself is not possible, for reasons that will be explained later. In short, this would prevent your application from dynamically linking against any system libraries at all.
Our Approach
Since our application relies on many non-system libraries that may not be installed on the user’s system, we need a way to include them. The most straightforward approach is the Replication Approach, where we ship these libraries alongside our application. However, this negates the benefits of dynamic linking, such as shared memory usage and system-wide updates. In such cases, statically linking these libraries into the application is a better choice, as it eliminates dependency issues entirely. It also enables additional optimizations, such as LTO, and results in a smaller package by stripping unused components from the included libraries.
Instead, we take a different approach: statically linking everything we can. When doing so, special care is needed if a dependency embeds another dependency within its static library. We’ve encountered static libraries that include object files from other static libraries (e.g., libcurl), even though we still need to link those libraries separately. This duplication is conveniently avoided with dynamic libraries, but with static libraries, you may need to extract all object files from the archive and remove the embedded ones manually, as sketched below. Similarly, compiler runtimes like libgcc default to dynamic linking; we recommend using -static-libgcc.
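As an illustration of that archive surgery, here is a minimal sketch that uses ar to unpack a static library, drop the object files that were embedded from another library, and repack the rest. The helper name and its arguments are hypothetical.

#!/bin/env python3
# Minimal sketch: rebuild a static library without the object files it embeds
# from another library, so the real dependency can be linked separately
# without duplicate-symbol conflicts. Names and arguments are illustrative.
import os, subprocess, sys, tempfile

def strip_embedded_objects(archive, output, embedded_prefixes):
    archive = os.path.abspath(archive)
    output = os.path.abspath(output)
    with tempfile.TemporaryDirectory() as tmp:
        # 'ar x' extracts every object file from the archive
        subprocess.run(['ar', 'x', archive], cwd=tmp, check=True)
        keep = [name for name in sorted(os.listdir(tmp))
                if not name.startswith(tuple(embedded_prefixes))]
        # 'ar rcs' builds a fresh, indexed archive from what remains
        subprocess.run(['ar', 'rcs', output, *keep], cwd=tmp, check=True)

if __name__ == '__main__':
    # e.g.: strip_embedded.py libcurl.a libcurl-clean.a inflate deflate
    strip_embedded_objects(sys.argv[1], sys.argv[2], sys.argv[3:])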
Finally, when it comes to dealing with system libraries, we use the Relaxation Approach. Rather than requiring exact or newer versions of system libraries, we link against versions that are old enough to be nearly universally compatible. This increases the likelihood that the user’s system libraries will work with our application, reducing dependency issues without the need for containerization or bundling system components and shims.
The method we suggest when linking against older system libraries is to obtain a corresponding older Linux environment. You don’t need to install an old Linux version on physical hardware or even set up a full virtual machine. Instead, a chroot provides a lightweight, isolated environment within an existing Linux installation, allowing you to build against an older system without the overhead of full virtualization. Ironically, this suggests that containers were the right solution all along—just not at runtime, but at build time.
To achieve this, we use debootstrap, an excellent script that creates a minimal Debian installation from scratch. Debian is particularly suited for this approach due to its stability and long-term support for older releases, making it a great choice for ensuring compatibility with older system libraries.
Of course, once you have an older Linux setup, you may find that its binary package toolchains are too outdated to build your software. To address this, we compile a modern LLVM toolchain from source and use it to build both our dependencies and our software. The details of this process are beyond the scope of this article.
Finally, we automate the entire debootstrap process with a Python script, which we’ve included here for reference.
#!/bin/env python3
import os, subprocess, shutil, multiprocessing

PACKAGES = [
    'build-essential'
]

DEBOOSTRAP = 'https://salsa.debian.org/installer-team/debootstrap.git'
ARCHIVE = 'http://archive.debian.org/debian'
VERSION = 'jessie' # Released in 2015

def chroot(pipe):
    try:
        os.chroot('chroot')
        os.chdir('/')
        # Setup an environment for the chroot
        env = {
            'HOME': '/root',
            'TERM': 'xterm',
            'PATH': '/bin:/usr/bin:/sbin:/usr/sbin'
        }
        # This Debian release is quite old, so the keyring keys will likely have
        # expired. To work around this we rewrite sources.list to contain
        # '[trusted=yes]'
        with open('/etc/apt/sources.list', 'w') as fp:
            fp.write(f'deb [trusted=yes] {ARCHIVE} {VERSION} main\n')
        # Update and install packages
        subprocess.run(['apt', 'update'], env=env)
        subprocess.run(['apt', 'install', '-y', *PACKAGES], env=env)
        #
        # Script your Linux here, remember to pass `env=env` to subprocess.run.
        #
        # We suggest downloading GCC 7.4.0, compiling it from source, and
        # installing it, since it's the minimum version required to compile the
        # latest LLVM from source. We then suggest downloading, compiling from
        # source, and installing the latest LLVM, which as of time of writing is
        # 20.1.0.
        #
        # You can then compile and install all other source packages your
        # software requires using this modern LLVM toolchain.
        #
        # You can also enter the chroot with an interactive shell from this
        # script by uncommenting the following and running this script as usual.
        # subprocess.run(['bash'])
        #
        # You can send messages to the parent with pipe.send()
        pipe.send('Done') # This one has special meaning in main
    except Exception as exception:
        pipe.send(exception)

def main():
    # We need to run as root to use 'mount', 'umount', and 'chroot'
    if os.geteuid() != 0:
        print('This script must be run as root')
        return 1
    # NOTE: the original listing is truncated at this point; the remainder of
    # main() below is an assumed reconstruction of the missing steps.
    # Fetch debootstrap and build the minimal Debian tree if not done already.
    if not os.path.exists('debootstrap'):
        subprocess.run(['git', 'clone', DEBOOSTRAP], check=True)
    if not os.path.exists('chroot'):
        env = dict(os.environ, DEBOOTSTRAP_DIR=os.path.abspath('debootstrap'))
        subprocess.run(['debootstrap/debootstrap', '--no-check-gpg',
                        VERSION, 'chroot', ARCHIVE], env=env, check=True)
    # Bind-mount the host's kernel filesystems into the chroot
    for path in ['dev', 'proc', 'sys']:
        subprocess.run(['mount', '--bind', f'/{path}', f'chroot/{path}'], check=True)
    try:
        # Do the chroot work in a child process so this process keeps its root
        ours, theirs = multiprocessing.Pipe()
        process = multiprocessing.Process(target=chroot, args=(theirs,))
        process.start()
        process.join()
        result = ours.recv() if ours.poll() else None
        if result != 'Done':
            print(f'chroot step failed: {result}')
            return 1
    finally:
        for path in ['dev', 'proc', 'sys']:
            subprocess.run(['umount', f'chroot/{path}'])
    return 0

if __name__ == '__main__':
    raise SystemExit(main())
11 Comments
sylware
This article missed a critical point which is "the right way" to select a glibc ABI version: see binutils ld documentation, second part of the page related to VERSION support.
This must include glibc internal symbols.
This will allow you to craft ELF binaries on a modern distro which will run on "older" distros. This is critical for games and game engines. There is significant upfront, one-time work in order to select an "old" glibc ABI.
The quick and dirty alternative is having a toolchain configured to link against an "old" glibc on the side.
The article also missed the critical -static-libstdc++ option for C++ applications (the C++ ABI is hell on earth), but did not miss -static-libgcc and the dynamic loading of system interface shared libs.
superkuh
As an end user I often patch the glibc version incompatibility away with https://github.com/corsix/polyfill-glibc
This often leads to discovering new version incompatibilities in other libs. But as the article says, the others can usually be statically compiled.
Jeaye
I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc will finish the job. Unless they have some need for loading shared libs at runtime which they didn't already have linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing (assuming I know of the security implications of statically linked binaries — which they didn't mention as a concern)?
bee_rider
Really the title should be something like “the difficulty of releasing binaries on Linux and how to work around it.” It isn’t really an atrocious state, the goal of a distro should be to get useful software into the repos. Software not distributed in an open source format doesn’t really help there.
James_K
Here's a thought: just distribute source code. ABI issues should be mostly fixed. Most computers can compile source code fast enough for the user not to notice and cache the results so that it's never a problem again. If you want optimised code, you can do a source-to-source optimisation, then zip and minify the file. You could compile such a library to approximately native speeds without much user-end lag using modern JIT methods, and maybe even run LTO in a background thread so that the executables outdo dynamically linked ones.
lukeh
Windows having multiple C libraries has its own pain points, in particular it's difficult to ship binary libraries that return allocated memory to their consumer (you either need to have the library consumer allocate the memory, which probably explains why so many Win32 APIs have this behaviour, or allow alloc/free functions to be registered). Not to mention different C libraries having their own file handle, TLS, etc state. Unsurprisingly Microsoft now ships the Universal CRT (UCRT) as part of Windows.
BwackNinja
There is no distinction between system and program libraries in Linux. We used to pretend there was one before usrmigration, but that was never good to take seriously.
The distro as packager model ensures that everything is mixed together in the filesystem and is actively hostile to external packaging. Vendoring dependencies or static linking improves compatibility by choosing known working versions, but decreases incentive and ability for downstream (or users) to upgrade those dependencies.
The libc stuff in this article is mostly glibc-specific, and you'd have fewer issues targeting musl. Mixing static linking and dlopen doesn't make much sense, as said here[1] which is an interesting thread. Even dns resolution on glibc implies dynamic linking due to nsswitch.
Solutions like Snap, Flatpak, and AppImage work to contain the problem by reusing the same abstractions internally rather than introducing anything that directly addresses the issue. We won't have a clean solution until we collectively abandon the FHS for a decentralized filesystem layout where adding an application (not just a program binary) is as easy as extracting a package into a folder and integrates with the rest of the system. I've worked on this off and on for a while, but being so opinionated makes everything an uphill battle while accepting the current reality is easy.
[1] https://musl.openwall.narkive.com/lW4KCyXd/static-linking-an…
jll29
One of the questions with multiple versions on the same box is: what about security issues in those older versions…
knowaveragejoe
No discussion of Cosmopolitan or APE?
mrbluecoat
> shipping software on Linux
That's a surprisingly hard nut to crack when containers won't work for your use case. We found https://github.com/silitics/rugix to work well in that situation.
brie22
Absolutely ridiculous that scrollbars are disabled for this page. The kids are too cool for such pedestrian web parts! Grow up and stop forcing your shitty UX on people.
Edit: Found it – it's black on black – even worse!