The old-fashioned language might not be used by many, but it’s still a part of my codebases.
As a long-time user (and active proponent) of Scheme/Common Lisp/Racket, I sometimes get asked why I stick with them. Fortunately, I have always headed up my own engineering organizations, so I’ve never had to justify it to management. But there’s an even more important constituency — my own engineering colleagues — who’ve never had the pleasure of using these languages. While they never ask for justification, they do ask out of intellectual curiosity, and sometimes out of wondering why I’m not going gaga over the next cool feature being dropped into Python or Scala, or whatever their flavor of the month is.
While the actual flavor of Lisp I use has varied (Scheme, Common Lisp, Racket, Lisp-for-Erlang), the core has always remained the same: an s-expression-based, dynamically typed, mostly functional, call-by-value, λ-calculus-based language.
I started serious programming in my teens in BASIC on a ZX Spectrum+, although I had previously dabbled in (hand-) writing Fortran programs. It was a formative period that truly defined my career path. Very quickly I was pushing the language to its limits, trying to write programs well beyond the limited capacity of the language and its implementation. I moved on to Pascal for a short while (Turbo Pascal on a DOS box), which was fun for a while, until I discovered C on Unix (Santa Cruz Operation Xenix!). That got me through a bachelor’s degree in Computer Science, but it always left me wanting more expressiveness in my programs.
This was when I discovered Functional Programming (thank you, IISc!) in Miranda (ugly Haskell’s very beautiful mom), and it opened my eyes to wanting beauty in my programs. My notion of expressiveness in a programming language took very large leaps. My conception of what programs should look like grew to encompass brevity, elegance, and readability.
Miranda wasn’t a particularly fast language, so execution speed was an issue. Miranda was also a statically typed language with Standard-ML-style type inference. In the beginning, I was enamored with the type system. Over time, however, I grew to despise it. While it helped me catch a few things at compile time, it mostly got in the way (more on this later).
A year or so after that, I ended up studying programming languages at Indiana University with Dan Friedman (of The Little Lisper / The Little Schemer fame). It was my introduction to Scheme, and the world of Lisp. I finally knew that I had found the perfect medium with which to express my programs. It has not changed in the last 25 years.
In this article, I’m trying to explain and explore why that has been so. Am I just an old dinosaur who won’t change his ways? Am I too haughty and contemptuous of new ideas? Or am I just jaded? The answer, I think, is none of the above. I found perfection, and nothing has come along yet to unseat it.
Let’s break it down a little. I said this a few paragraphs back:
An s-expression-based, dynamically typed, mostly functional, call-by-value, λ-calculus-based language
I’m going to start explaining this — backward.
The fundamental entity in all programs is a function. Functions have an intentionality to them that forms the foundational basis of the software design process. You’re always thinking about how information is acted upon, how it is transformed, and how it is produced. I have yet to find a foundational framework that captures this inherent intentionality (the ‘how’) better than the λ-calculus.
The word intentionality perhaps threw you off. Mathematics has two ways to think about functions. First, as a set of ordered pairs: (input, output). While this representation is a great way to prove theorems about functions, it is utterly useless when coding. This is also known as the extensional view of functions.
The second way to think about functions is as a transformation rule. For example: multiply the input by itself to get the output (which gives us the squaring function, conveniently abbreviated in many programming languages as sqr). This is the intensional view of functions, which the λ-calculus captures nicely, providing simple rules that help us prove theorems about our functions without resorting to extensionality.
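In Scheme, the transformation rule is the program itself. Here is a minimal sketch of the squaring function written directly as a λ-expression (the name sqr is ours, chosen for illustration):

```scheme
;; The intensional view: a function is a rule for transforming
;; its input. "Multiply the input by itself" becomes, literally:
(define sqr
  (lambda (x) (* x x)))

(sqr 5)  ; => 25
```

There is no separate notation for the rule and the code: the λ-expression is both the mathematical description and the executable artifact.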
Now wait a minute, I’m sure you’re thinking. I’ve never proved sh*t about my functions. I’m betting that, in fact, you have. And that you do it all the time. You’re always convincing yourself that your function is doing the right thing. Yours may not be a formal proof (which may be what leads to some bugs), but reasoning about code is something that software developers do all the time. They’re playing the code back in their head to see how it behaves.
Languages based on the λ-calculus make it really easy to “play back the code” in your head. The simple rules of the λ-calculus mean that there are fewer things to carry in your head and the code is easy to read and understand.
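To make that "playback" concrete, here is what the mental evaluation of a small call looks like, step by step, under call-by-value and β-reduction (again using an illustrative sqr):

```scheme
(define sqr (lambda (x) (* x x)))

;; "Playing back" (sqr (+ 2 3)) in your head:
;;   (sqr (+ 2 3))
;; = (sqr 5)                   ; call-by-value: evaluate the argument first
;; = ((lambda (x) (* x x)) 5)  ; look up sqr
;; = (* 5 5)                   ; β-reduction: substitute 5 for x in the body
;; = 25
(sqr (+ 2 3))  ; => 25
```

Substitution is the only rule you need to carry in your head; there is no hidden state or control machinery to track alongside it.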
Programming languages are, of course, practical tools, so the core simplicity has to be augmented in order to suit a broader purpose. This is why I love Scheme (and my current favorite flavor of it, Racket — CS, for those who care about such things). What it adds to the core λ-calculus is the bare minimum to make it usable. Even the additions follow the basic principles espoused by the λ-calculus, so there are few surprises.
This does mean, of course, that recursion is a way of life. If you’re one of those people for whom recursion never made sense, or if you still believe “recursion is inefficient,” then it’s high time to revisit it. Scheme (and Racket) implement recursion as efficiently as loops wherever possible: a call in tail position reuses the current stack frame instead of growing the stack. Not only that, the Scheme standard requires it.
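A small sketch of what this buys you in practice. In the loop below, the recursive call is the last thing the function does (it is in tail position), so a conforming Scheme runs it in constant stack space, even for a million iterations (sum-to is a name invented here for illustration):

```scheme
;; Tail-recursive summation of 1..n with an accumulator.
;; The call to sum-to is in tail position: nothing remains to be
;; done after it returns, so the implementation reuses the frame.
(define (sum-to n acc)
  (if (zero? n)
      acc
      (sum-to (- n 1) (+ acc n))))

(sum-to 1000000 0)  ; => 500000500000, with no stack growth
```

Written this way, the recursion *is* the loop; there is no separate looping construct to learn, and no stack overflow to fear.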
This feature, called tail-call optimization (or TCO; the Scheme community prefers “proper tail calls,” since it is a semantic guarantee rather than an optional optimization), has been around for a few decades. It’s a sad commentary on the state of our programming languages that almost none of the mainstream modern languages support it. This is especially a problem with the JVM, as newer languages have emerged targeting the JVM as a runtime architecture. The JVM does not support it, and consequently the languages built on top of the JVM have to jump through hoops to provide some semblance of a sometimes-applicable TCO. So, I always view any functional language targeting the JVM with