Back in 2020, a Russian state-sponsored group got into SolarWinds’ build system and inserted command-and-control (C2) code into a routine software update for a network monitoring tool called Orion. It was all over the news, and for good reason, given the extent of the breach (into particularly sensitive parts of the US government) and a recovery process that will likely take years. Given its high profile, I’m shocked to report that very little seems to have been learned from that attack.
To me, the hack was a wake-up call: the way we install and run software is insecure by design and needs a rework, maybe using capabilities-based security. But all I hear about are solutions that kinda miss the point. Let’s go over those first.
“We should sign and verify all our dependencies”
In the wake of the SolarWinds hack, interest in “securing the software supply chain” grew considerably, including a May 2021 executive order directing NIST and CISA to develop guidelines on the subject. The Supply-chain Levels for Software Artifacts (SLSA) framework also launched that same year and has been steadily growing in popularity.
Don’t get me wrong: I appreciate the extra interest in this area. However, the fact remains that malicious code can be signed and verified too, depending on how deep in the supply chain the attackers sit. And with state-sponsored resources, they can get pretty deep. Anything could happen in the background of your CI worker (or your laptop) between when you execute `git checkout` and `make`. Any checksums you generate or verify can be modified right before you check them. Or maybe your `/usr/local/bin/sha256sum` has been tampered with. The list goes on.
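To make that concrete, here is what a typical checksum verification looks like in practice (the file names are hypothetical, just for illustration). Notice that every step runs on, and therefore trusts, the very machine an attacker may already control:

```shell
# A vendor publishes a checksum alongside a release artifact; simulate that here.
# (Hypothetical file names for illustration.)
echo "pretend this is a release tarball" > orion-release.tar.gz
sha256sum orion-release.tar.gz > orion-release.tar.gz.sha256

# Consumers verify before building. This passes...
sha256sum -c orion-release.tar.gz.sha256
# ...but the result is only as trustworthy as the sha256sum binary, the
# shell, and everything else running on this host. A compromised toolchain
# can happily report "OK" for a tampered file.
```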
When we’re talking about getting all major open source projects (which have little to no funding) to add enough security to resist nation-states (which have plenty of funding), the math simply doesn’t add up.
“We should disable automatic updates”
Automatic updates are a tradeoff, I’ll grant that. You are trusting a vendor not to ship a bad update in exchange for getting security fixes ASAP. However, just think for half a second about how the SolarWinds hack worked. The attackers snuck some code into an opaque, proprietary binary blob that lay dormant for 12 to 14 days before doing anything strange. There is absolutely no way we can perform a full binary analysis of every new version of every binary blob that powers modern IT.
Automating updates is generally recommended because it “helps to ensure the timeliness and completeness of system patching operations”, as mentioned in NIST 800-53 §3.19. If you do have the time for manual reviews AND for audits confirming that the manual updates were actually applied, that’s preferable, but obviously it takes a lot of time. For everything else, automation keeps you safer. The SolarWinds hack changed nothing about that calculus.
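For what it’s worth, opting into that automation is usually a few lines of configuration. A minimal sketch for a Debian/Ubuntu system using the stock unattended-upgrades package (the file path and option names below are the standard ones, but check your distribution’s documentation before relying on them):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

By default this applies only security updates, which is roughly the tradeoff described above: fast fixes in exchange for trusting the vendor’s update channel.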