I’ve been programming for a long time. When I say long time, I mean
decades, with an S. Hopefully that’s long enough. In that time my
experience has primarily been programming for contemporary platforms
(Linux, Windows, macOS) on desktop-class or server-class CPU
architectures. Recently, I embarked on building a MIDI engine for a system with
significantly less processing power.
Soon after I started, I ran into the issue of guaranteeing that it
was impossible for the queue of input events to build up indefinitely.
This essentially boils down to making sure that each event handler
never runs longer than some maximum amount of time. Then it hit me: I
had heard this before. A maximum amount of time meant I was building a
real-time system.
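To make that concrete: the queue stays bounded as long as events are consumed at least as fast as they can arrive, which is exactly a worst-case bound on each handler. Below is a minimal C sketch of that invariant; the queue layout and the 960 µs budget (one three-byte message at MIDI’s 31,250 baud) are illustrative assumptions on my part, not details of the engine itself.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative budget: at MIDI's 31,250 baud (10 bits per byte on the
 * wire), a 3-byte message takes about 960 microseconds to arrive, so a
 * handler that always finishes inside that window can never fall behind. */
#define MIN_EVENT_PERIOD_US 960

typedef struct {
    uint8_t status;
    uint8_t data[2];
} midi_event_t;

/* Fixed-capacity ring buffer: enqueue/dequeue are O(1) in the worst
 * case and memory is bounded up front, so the queue itself adds no
 * unbounded work. */
#define QUEUE_CAP 64
typedef struct {
    midi_event_t buf[QUEUE_CAP];
    uint32_t head, tail; /* head == tail means empty */
} event_queue_t;

static bool queue_pop(event_queue_t *q, midi_event_t *out) {
    if (q->head == q->tail) return false;
    *out = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    return true;
}

/* Stability condition: if handle_event's worst case stays below
 * MIN_EVENT_PERIOD_US, the queue drains at least as fast as it fills
 * and cannot build up indefinitely. */
void drain(event_queue_t *q, void (*handle_event)(const midi_event_t *)) {
    midi_event_t ev;
    while (queue_pop(q, &ev)) {
        handle_event(&ev); /* must have a known worst-case run time */
    }
}
```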
Once I realized that I had to additionally take real-time constraints
into account while building, it drove a lot of the engineering decisions
I made in a specific direction. In particular, the worst-case time of
every sequence of code must be accounted for; the average-case time is
irrelevant for correctness. Under this discipline, algorithms with
better worst-case time but worse average-case time are preferred, every
branch must be budgeted for its slower path, and adding fast paths to
slow algorithms does not help. It was interesting work, and it changed
how I thought about building systems in a profound way.
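One concrete example of that trade-off is appending to a buffer. Here is a small C sketch, with hypothetical types and sizes, contrasting a growable buffer (amortized O(1) per push, but an occasional O(n) copy) with a fixed-capacity one whose push is O(1) even in the worst case:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Average-case friendly: amortized O(1) push, but any individual push
 * may trigger a realloc that copies everything. Good throughput, yet a
 * single push has no useful worst-case bound. */
typedef struct { uint8_t *data; size_t len, cap; } dyn_buf_t;

bool dyn_push(dyn_buf_t *b, uint8_t byte) {
    if (b->len == b->cap) {
        size_t new_cap = b->cap ? b->cap * 2 : 16;
        uint8_t *p = realloc(b->data, new_cap); /* O(n) copy, unbounded pause */
        if (!p) return false;
        b->data = p;
        b->cap = new_cap;
    }
    b->data[b->len++] = byte;
    return true;
}

/* Real-time friendly: capacity is fixed up front, so push is O(1) in
 * the worst case. The cost is a hard limit you must size correctly and
 * handle explicitly when it is reached. */
typedef struct { uint8_t data[256]; size_t len; } fixed_buf_t;

bool fixed_push(fixed_buf_t *b, uint8_t byte) {
    if (b->len == sizeof b->data) return false; /* overflow is explicit */
    b->data[b->len++] = byte;
    return true;
}
```

Under real-time discipline the fixed version wins: overflow becomes an explicit, testable condition rather than an unbounded pause, even though the growable version looks better on average.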
Armed with this new awareness, I began to notice the lack of
real-time discipline in other applications, including my own. This was a
jarring experience: how had I never noticed it before? The biggest shock
of this period came when I realized that most mainstream desktop UI
applications are fundamentally broken.
When I click a mouse button, when I press a key on the keyboard, I
expect a response in a bound