You have a cool idea for a Rust application, quickly hack up a Cargo project for it, and hey, it works! You pull out that old laptop from university — the one that barely works because you convinced it to dual boot some version of Gentoo and Windows 7 — and hey, it works on there too! Dang, isn’t it nice for Rust projects to be so reliably portable between systems?
Happily convinced of your coding prowess, you mash `cargo publish` into your terminal and tell your friends to `cargo install axolotlsay`. It's glorious:

```
axolotlsay "blub blub blub"

+----------------+
| blub blub blub |
+----------------+
        /
≽(◕ ᴗ ◕)≼
```
Then your friend who uses Ubuntu says

@tux4life: `cargo install` fails because you're using unstable features?

Huh??? You definitely aren't using unstable features!

Oh. Oh no. They did `apt install rustc`. Their compiler is SO OLD and you just HAD to use new features from the latest release! Clippy complained that you weren't!!
Then your other friend who uses Windows chimes in
@dosboxer: what’s cargo?
Oh right, you haven’t infected them with your obsession with Rust yet.
Maybe `cargo install` isn't that great? Maybe you should make prebuilt binaries! Yeah! You've definitely seen some of the bigger Rust projects do that. Like uh, ripgrep! They've got Github Releases with prebuilt binaries! Looks like they have some Github CI that generates them!
Hmm, that looks kinda reasonable, although they do a bunch of stuff that doesn’t make sense for your project, so I guess you’ve gotta comment out a bunch of these shell scripts and, oh damn you made a typo and the CI job failed 20 minutes in. And again. And again…
Congrats kid you’re A Release Engineer now and your life is hell. Enjoy debugging basic typos on a remote machine with 20 minute latency because you sure can’t run those Github CI bash-scripts-in-yaml files locally! Couldn’t be me! I definitely don’t have several repos with dozens of prerelease git tags from me trying to debug the dang release process!
But anyway you get it working and link your friends the Github Release and then they just come back with
@tux4life: i’m getting weird glibc errors when I try to run it…
@dosboxer: windows just “protected my PC” and prevented it from running…
@bigmacfan: how do I get this on my PATH?
@nixuserdottxt: it just says “file not found”???
There’s Gotta Be A Better Way!™️
As you can see, you’re going to run into annoying end-user issues regardless of how you choose to publish a Rust application.
This section is maybe just an excuse for me to rant about all these random problems with Release Engineering… you can skip to the next section if you just want to learn more about cargo-dist, I GUESS.
If you're going to put no effort into it, `cargo publish` and `cargo install` are… surprisingly effective! Say what you will about Cargo, but it's really dang reliable when it comes to building a random project on crates.io on a random machine! This makes it a pretty solid option for tools-for-other-Rust-developers. Just a few keystrokes and a few minutes of waiting and I've reliably got a development machine set up with cargo-release, cargo-binstall, cargo-fuzz, cargo-insta, and so on! Heck, it will probably even work on systems the application was never tested on, because Rust builds tend to Just Work.
But there’s definitely some tradeoffs:
- You need to have a compatible version of Rust: You need to hold back on using The Absolute Latest Rust Features if you want to support people with older Rust toolchains (something that’s more reasonable for libraries, but needlessly restrictive for applications). Also on rare occasion a newer compiler could lose the ability to compile the program, but that’s not a huge concern with Rust (Rust devs try hard to avoid this, but nothing’s perfect).
- Not everyone has a Rust toolchain: whether you’re making a CLI devtool or The Next Big Videogame, you’re making an application for someone who probably doesn’t care about the language it’s written in and just wants to download and run it! Won’t anyone think of the Gamers!?
- Some things can’t be published to crates.io: Most obviously you might have proprietary source code or secrets-that-are-needed-at-build-time that can’t be published. But even if you don’t care about that you might just run into size limits on crates.io if your application has too many static assets like images.
- Building Rust packages takes a lot of time and resources: Look, we're not talking Chromium build times, but if you have a task that takes 5 minutes and 4 of those are building the Rust application that does the work… that adds up. Heck, if the target hardware is too weak or obscure it might be literally impossible to run rustc on it!
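One partial mitigation for that first tradeoff: you can at least declare your minimum supported Rust version in Cargo.toml, so people on old toolchains get one clear error instead of a pile of confusing "unstable feature" complaints. A minimal sketch (the package name and version numbers here are just illustrative):

```toml
[package]
name = "axolotlsay"
version = "0.1.0"
edition = "2021"
# Cargo 1.56+ reads this field and refuses to build on older toolchains
# with a clear "your Rust is too old" message. (Cargos older than 1.56
# ignore the field entirely, but it's still better than nothing.)
rust-version = "1.66"
```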
If any of those are too big of a problem for you, then you probably want to build official binaries/installers for your applications and have users download and run them directly. Unfortunately, when you try to do this you quickly learn that `cargo install` was solving a lot of annoying problems for you:
- System Dependencies: Rust applications don’t have many system requirements (equivalent to C), but they’re unavoidable. Any time your application needs a system API you’re saying “I won’t run on machines that don’t have this API”. When you build your Rust applications, do you take the time to say what Linux glibc versions you support? What macOS SDKs you support? If not, the compiler has no idea what to choose other than “whatever’s supported by the current machine”. That can very easily result in a binary which runs great on the machine it was built on but can’t run at all on other systems. This is one of many reasons folks justifiably love musl for prebuilt Linux binaries, but now you need to get musl-tools and learn about cross-compiling and… 😭
- Building For Everything: Speaking of cross-compilation… how many computers do you have? Windows? macOS? Linux? Oh great there’s ARM64 macbooks now? Oh no they’ve got Linux running on those too now..? Wait, what’s RISC-V? Tools like cargo-cross definitely help with reducing the number of machines you need, but also most Rust devs I know run away screaming as soon as you say “Docker”, to say nothing of the fact that Apple… Really Does Not Want You Doing That (and weirdly NixOS? (unless you’re making a Flatpak?? (but those don’t work for CLIs??? (aAaAAAaaAAAA????))))
- Building Locally: Of course the Easy Solution for most of your cross-compilation woes is to use something like Github CI to spin up a bunch of native machines for the targets you care about (and for the platforms you do need to cross-compile for, you're hopefully doing a smaller leap, like x64 macOS -> ARM64 macOS). taiki-e/create-gh-release-action is absolutely wonderful for streamlining a lot of this! But now your build system is… checks notes… shell scripts embedded in an executable yaml file that has its execution distributed across 7 different machines over the network? I really hope you don't need to debug that locally (sobs in Windows user).
- Hosting The Artifacts: Nice binaries you've got there, where are you putting them? How are people going to find them? How are they going to know which one to use? Ah, you just uploaded some `.tar.gz` bundles to a Github Release your users need to find? And they need to know they want `aarch64-apple-darwin.tar.gz` for an ARM MacBook?
- Downloading, Installing, Updating: It sure was handy that `cargo install` handled fetching over the network and unpacking the contents of the package! And that it put it on your PATH! And registered the application such that `cargo uninstall` and updates work! Now your users need to fumble around with unpacking it manually, find the actual application, and put it in a custom `bin` directory they've added to their PATH!
- Scary Warnings: Another thing `cargo install` was doing for you was making the binary implicitly trusted by the system. With prebuilt binaries on macOS and Windows, your user is probably going to get some scary warnings about untrusted applications they need to bypass with a secret handshake! Fun! (Did you know the warning you get on Windows isn't about code signing, but is actually just a special flag Windows' builtin unzipping tool sets on all executables it extracts?)
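To make the "which one do they download" problem concrete, here's a hypothetical sketch of the platform-detection dance every curl-sh installer ends up doing; the artifact names are made up for this example:

```shell
#!/bin/sh
# Hypothetical sketch: map the local OS/arch (as reported by uname) to the
# name of the prebuilt release artifact a user should download.
artifact_for() {
  os="$1"
  arch="$2"
  case "$os-$arch" in
    Linux-x86_64)  echo "axolotlsay-x86_64-unknown-linux-musl.tar.gz" ;;
    Darwin-x86_64) echo "axolotlsay-x86_64-apple-darwin.tar.gz" ;;
    Darwin-arm64)  echo "axolotlsay-aarch64-apple-darwin.tar.gz" ;;
    *) echo "unsupported platform: $os-$arch" >&2; return 1 ;;
  esac
}

artifact_for Darwin arm64  # → axolotlsay-aarch64-apple-darwin.tar.gz
```

And that's before handling Windows, 32-bit platforms, glibc-vs-musl, checksums, and actually unpacking the thing: exactly the kind of boilerplate you'd want a generated installer to own.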
And regardless of which approach you’re opting for, you’re probably dropping a lot of useful and important things on the ground!
- Debuginfo: By default `cargo install` and `cargo build --release` disable most debuginfo, which makes sense, debuginfo is Very Big… but this is a false dichotomy! This is why we have split-debuginfo, which lets us have a nice and small optimized binary with the big debuginfo on a symbol server somewhere for when a debugger or crashreporter needs it! Even if you don't have a use for them today, once you've thrown out the build directory those things are gone forever! Are you archiving pdbs, dSYMs, and dwps for all your executables? Why not!?
- Frame Pointers: I AM ASKING YOU AGAIN TO ENABLE FRAME POINTERS IN YOUR BUILDS, YOUR LOCAL PERFORMANCE ENGINEER WILL THANK YOU FOR THE GIFT.
- Link-Time Optimization: Did you know that `cargo install` and `cargo build --release` actually leave some performance on the table? For the sake of compile times (and resource usage), they don't use the heaviest form of LTO. This is honestly probably the right call for the way most people use those commands, but if you're building A Published Release there's a decent argument to crank the knob back towards "hellish build times" for those last few percent!
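If you want to claw those back by hand, Cargo lets you define a custom profile for published builds. A sketch of what that might look like (the profile name and specific values are illustrative, and split-debuginfo behavior varies by platform):

```toml
# Cargo.toml: a hypothetical "distribution" profile
[profile.dist]
inherits = "release"
lto = "fat"                 # heaviest LTO: slower builds, faster/smaller binaries
debug = true                # generate full debuginfo...
split-debuginfo = "packed"  # ...but keep it out of the binary, in a separate
                            # archivable file (dSYM/pdb/dwp-style)
```

Then build with `cargo build --profile dist` (custom profiles need Cargo 1.57+). Frame pointers are a rustc flag rather than a profile key, so that one is more like `RUSTFLAGS="-Cforce-frame-pointers=yes"`.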
There’s so many things to care about!!!
Alright, so here's my pitch on how to deal with those problems: cargo-dist, `cargo build` but For Building Final Distributable Artifacts! No more copying around hard-to-run shell scripts, just run `cargo dist` and it will handle everything for you, on any machine!
This includes (to various levels of “implemented”):
- Building all your binaries with Production Flags
- Copying the binaries and static assets into zips/tars (or more!)
- Gathering split-debuginfo files for all the binaries
- Generating installers (scope TBD, but at least curl-sh scripts)
But of course that doesn't really solve all of those problems. I haven't solved the treachery of CI scripting — OH BUT WAIT I HAVE. Because cargo-dist fundamentally understands what it's supposed to do for any given platform/target, we can have my favourite command: `cargo dist generate-ci`! This command generates the CI scripts to invoke itself!
For Github this means it will:
- Wait for a git tag to be pushed that looks like a version
- Create a draft Github Release™️ for that version
- Spin up machines to build all the targets
- Download a prebuilt version of cargo-dist
- Invoke cargo-dist as needed
- Upload the resulting artifacts to the Github Release™️
- Upload a machine-readable manifest describing all the artifacts
- On success of all tasks, mark the Github Release™️ as a non-draft
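That generated workflow isn't magic, it's just the flow above written out for you. The tag-triggering step alone looks roughly like this (a hypothetical sketch, not the exact output of generate-ci):

```yaml
# .github/workflows/release.yml (sketch)
name: Release
on:
  push:
    tags:
      # only run when a version-looking tag like v0.1.0 is pushed
      - "v[0-9]+.[0-9]+.[0-9]+"
```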
The easiest way to get that started is with another great command that already exists, cargo-release. So a way-too-quick start for this is just:
```
# install tools
cargo install cargo-dist
cargo install cargo-release

# one-time setup of ci scripts and build flags
cargo dist init --ci=github
git add .
git commit -am "wow shiny new cargo-dist CI!"

# cut a release
cargo release 0.1.0
```
After that you can just kick back and relax, because in a few minutes you'll have a Github Release with everything built and uploaded!
At this point it's worth noting that you're not locked into Github's ecosystem, because you haven't invested a drop of effort into it. All you did was tell cargo-dist "hey I'm using github" and it took the wheel. In principle (once implemented), switching to Gitlab or whatever else should be as easy as `cargo dist generate-ci gitlab`. Or if you've got something more bespoke that we don't support, well, the `cargo dist` command was already doing most of the heavy lifting, so hand-writing CI orchestration for it should hopefully be a lot easier!
Or heck, go really bespoke and run `cargo dist` locally, because again, you can! Really savor the experience of artisanally uploading the artifacts to your favourite FTP server.
Oh, and did I mention that this all happens to work well with another community tool, cargo-binstall? It tries to fetch prebuilt binaries for proj