
Pipelining might be my favorite programming language feature
by Mond_
Epistemic status: Don’t take it too seriously. Or do. idk, I can’t stop you.

Pipelining might be my favorite programming language feature.
What is pipelining? Pipelining is the feature that lets you omit a single argument from a call's
parameter list by implicitly passing along the value of the previous expression.
When I say pipelining, I’m talking about the ability to write code like this:
fn get_ids(data: Vec<Widget>) -> Vec<Id> {
    data.iter()              // get iterator over elements of the list
        .filter(|w| w.alive) // use lambda to ignore tombstoned widgets
        .map(|w| w.id)       // extract ids from widgets
        .collect()           // assemble iterator into data structure (Vec)
}
As opposed to code like this. (This is not real Rust code. A quick challenge for the curious Rustacean: can
you explain why we cannot rewrite the above code like this, even if we import all of the symbols?)
fn get_ids(data: Vec<Widget>) -> Vec<Id> {
    collect(map(filter(iter(data), |w| w.alive), |w| w.id))
}
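(One plausible answer to that challenge, sketched with made-up `Widget` and `Id` stand-ins since the article doesn't define them: `filter`, `map`, and `collect` are trait methods on `Iterator`, not free functions, so `use` cannot bring them into scope as bare names. The closest legal spelling of the nested version needs fully qualified paths, which is arguably even worse:)

```rust
struct Widget {
    alive: bool,
    id: u32, // stand-in for `Id`
}

// The adapters are trait methods, so the nested style only works
// via fully qualified calls; there are no importable free functions
// `iter`, `filter`, `map`, or `collect` to pull in with `use`.
fn get_ids(data: Vec<Widget>) -> Vec<u32> {
    Iterator::collect(Iterator::map(
        Iterator::filter(data.iter(), |w| w.alive),
        |w| w.id,
    ))
}

fn main() {
    let widgets = vec![
        Widget { alive: true, id: 1 },
        Widget { alive: false, id: 2 },
        Widget { alive: true, id: 3 },
    ];
    println!("{:?}", get_ids(widgets)); // [1, 3]
}
```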
I honestly feel like this should be so obvious that it shouldn’t even be up for debate. The first code
example—with its nice ‘pipelining’ or ‘method chaining’ or whatever you want to call it—it
just works. It can be read line-by-line. It’s easy to annotate it with comments. It doesn’t
require introduction of new variables to become more readable since it’s already readable as is.
As opposed to, y’know, the other version, where the first word in the line describes the final action our
function performs.
Let me make it very clear: this is a hot take about syntax. In practice,
semantics beat syntax every day of the week. In other words, don’t take it too seriously.
Second, this is not about imperative vs. functional programming. This article takes for granted that
you’re already on board with concepts such as ‘map’ and ‘filter’. It’s possible to overuse that style, but
I won’t talk about it here.
- You already agree with me
- Editing Benefits
- Code Discovery
- SQL
- The Builder Pattern
- Rust’s pipelining is pretty neat
- Conclusion
You already agree with me
Here is a feature that’s so bog-standard in modern programming languages that it barely feels like a
feature at all: member access for structs or classes with our beloved friend, the `.`-operator.
This is a form of pipelining: it puts the data first, the operator in the middle, and concludes with the
action (selecting a member field). That’s an instance of what I call pipelining.
type Bar struct {
    field int
}

func get_field(bar Bar) int {
    return bar.field
}

// vs. syntax like that of Python's `getattr` function
func get_field(bar Bar) int {
    return getattr(bar, "field")
}
You see what I am getting at, right? It’s the same principle. One of the reasons why
`x.y`-style member access syntax (and `x.y()`-style method call syntax!) is popular
is that it’s easy to read and chains easily.
Let’s make the comparison slightly more fair, and pretend that we have to write `x.get(y)`.
Compare:
fizz.get(bar).get(buzz).get(foo)
// vs.
get(get(get(fizz, bar), buzz), foo)
Which one of these is easier to read? The pipelined syntax, obviously. This example is easy to parse
either way, but imagine you’d like to tune out some of the information and focus purely on the final operation.
.get(foo)
// vs.
get(, foo)
You see the problem, right? In the first example, we have ‘all of the previous stuff’ and then
apply another operation to it. In the second example, the operation we want to perform
(`get`) and the new operand (`foo`) are spread out, with ‘all of the previous stuff’
sitting between them.
Looking back at our original example, the problem should be obvious:
fn get_ids(data: Vec<Widget>) -> Vec<Id> {
collect(map(filter(iter(data), |w| w.alive), |w| w.id))
}
-----------------------------1 // it's fun to parse the whole line to find the start
------------------------2
-----------------3
---------------------------------------4 // all the way back to find the second arg
-------------5
------------------------------------------------------6 // and all the way back again
-----7 // okay the final step is the first word in the line that makes sense
I cannot deny the allegations: I just don’t think it makes sense to write code like that as long as a
clearly better option exists.
Why would I have to parse the whole line just to figure out where my input comes in, and why is the data
flow ‘from the inside to the outside’? It’s kind of silly, if you ask me.
Editing Benefits

Readability is nice, and I could add a whole section complaining about the mess that’s Python’s
‘functional’ features.
However, let’s take a step back and talk about ease of editing. Going back to the example above, imagine
you’d like to add another `map` (or any other function call) in the middle there. How easy is this?
fn get_ids(data: Vec<Widget>) -> Vec<Id> {
    collect(map(filter(map(iter(data), |w| w.toWingding()), |w| w.alive), |w| w.id))
}
Consider:
- You’ll have to parse through the line, counting commas and parentheses, to find the exact place to add
the closing parenthesis.
- The `git diff` of this is going to be basically unreadable; everything is crammed onto one line.
- This line is getting long and unreadable, and at that point you’ll want to refactor it anyway!
fn get_ids(data: Vec<Widget>) -> Vec<Id> {
    data.iter()
        .map(|w| w.toWingding())
        .filter(|w| w.alive)
        .map(|w| w.id)
        .collect()
}
This is adding a single line of code. No parenthesis counting. It’s easy and obvious. It’s easy to write
and easy to review. Perhaps most importantly, it shows up incredibly nicely in the `blame`
layer of whatever editor or code exploration tool you’re using.
You might think that this issue is just about trying to cram everything onto a single line, but
frankly, trying to move away from that doesn’t help much. It will still mess up your git diffs and the
blame layer.
You can, of course, just assign the result of every `filter` and `map` call to a
helper variable, and I will (begrudgingly) acknowledge that that works, and is
significantly better than trying to do absurd levels of nesting.
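(For reference, that begrudgingly-acknowledged helper-variable version might look something like this; `Widget` and `Id` aren't spelled out in the article, so a `u32` id stands in here:)

```rust
struct Widget {
    alive: bool,
    id: u32, // stand-in for `Id`
}

fn get_ids(data: Vec<Widget>) -> Vec<u32> {
    // Every chain step becomes a named intermediate value.
    let iterator = data.iter();
    let alive_widgets = iterator.filter(|w| w.alive);
    let ids = alive_widgets.map(|w| w.id);
    ids.collect()
}

fn main() {
    let widgets = vec![
        Widget { alive: true, id: 1 },
        Widget { alive: false, id: 2 },
    ];
    println!("{:?}", get_ids(widgets)); // [1]
}
```

It diffs line-by-line like the chained version, but every step now needs a name that you have to invent and the reader has to track.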
Code Discovery
When you press `.` in your IDE, it will show a neat little pop-up that tells you which methods
you can call or which fields you can access.
This is probably the single IDE feature with the biggest value add, and if not that, then at least the
single most frequently used one. Some people will tell you that static analysis for namespace or
module-level code discovery is useless in the age of AI autocompletion and vibe coding, but I very much
disagree.1
“grug very like type systems make programming easier. for grug, type systems most value when grug hit
dot on keyboard and list of things grug can do pop up magic. this 90% of value of type system or more to
grug” — grug
Words to live by. What he’s describing here is something that essentially requires pipelining to
work at all. (And types or type annotations, but having those is the direction the industry is moving in
anyway.)
It doesn’t matter whether it’s the trusty `.` operator, C++’s `->`, or
something more bespoke such as Elm’s or Gleam’s `|>` or Haskell’s `&`. In the
end, it’s a pipeline operator, and the same principle applies: if your
LSP knows the type of what’s on the left, it should in principle be able to offer suggestions for what to do next.
If your favorite language’s LSP/IDE does a poor job of offering suggestions during pipelining, it’s
probably for one of the following reasons:
- You don’t know which type you’re even holding. This happens most often when the language is dynamically
typed, ’types’ are hard to deduce with static analysis, and you’re touching/writing code without t
34 Comments
SimonDorfman
The tidyverse folks in R have been using that for a while: https://magrittr.tidyverse.org/reference/pipe.html
jaymbo
This is why I love Scala so much
shae
If Python object methods returned `self` by default instead of `None` you could do this in Python too!
This is my biggest complaint about Python.
mexicocitinluez
LINQ is easily one of C#'s best features.
kordlessagain
While the author claims "semantics beat syntax every day of the week," the entire article focuses on syntax preferences rather than semantic differences.
Pipelining can become hard to debug when chains get very long. The author doesn't address how hard it can be to identify which step in a long chain caused an error.
They do make fun of Python, however. But don't say much about why they don't like it other than showing a low-res photo of a rock with a pipe routed around it.
Ambiguity about what constitutes "pipelining" is the real issue here. The definition keeps shifting throughout the article. Is it method chaining? Operator overloading? First-class functions? The author uses examples that function very differently.
kuon
That's also why I enjoy elixir a lot.
The |> operator is really cool.
drchickensalad
I miss F#
osigurdson
C# has had "Pipelining" (aka Linq) for 17 years. I do miss this kind of stuff in Go a little.
zelphirkalt
To one up this: Of course it is even better, if your language allows you to implement proper pipelining with implicit argument passing by yourself. Then the standard language does not need to provide it and assign meaning to some symbols for pipelining. You can decide for yourself what symbols are used and what you find intuitive.
Pipelining can guide one to write a bit cleaner code, viewing steps of computation as such, and not as modifications of global state. It forces one to make each step return a result, write proper functions. I like proper pipelining a lot.
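(The point above can be sketched even in Rust, which has no built-in pipe operator: a two-line extension trait gives every value a `.pipe` method. This mirrors in spirit what the third-party `tap` crate offers; the `Pipe` name here is made up for illustration.)

```rust
// A universal, user-defined pipe: apply any function to `self`,
// passing the value along to the next step.
trait Pipe: Sized {
    fn pipe<T>(self, f: impl FnOnce(Self) -> T) -> T {
        f(self)
    }
}

// Blanket impl: every sized type gets `.pipe(...)`.
impl<T> Pipe for T {}

fn main() {
    let result = "  42  "
        .trim()
        .pipe(str::parse::<i32>)
        .pipe(Result::unwrap)
        .pipe(|n| n * 2);
    println!("{result}"); // 84
}
```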
mrkeen
The second approach is open for extension – it allows you to write new functions on old datatypes.
> Quick challenge for the curious Rustacean, can you explain why we cannot rewrite the above code like this, even if we import all of the symbols?
Probably for lack of
> weird operators like <$>, <*>, $, or >>=
epolanski
I personally like how effect-ts allows you to write both pipelines or imperative code to express the very same things.
Building pipelines:
https://effect.website/docs/getting-started/building-pipelin…
Using generators:
https://effect.website/docs/getting-started/using-generators…
Having both options is great (at the beginning effect had only pipe-based pipelines), after years of writing effect I'm convinced that most of the time you'd rather write and read imperative code than pipelines which definitely have their place in code bases.
In fact most of the community, at large, converged at using imperative-style generators over pipelines and having onboarded many devs and having seen many long-time pipeliners converging to classical imperative control flow seems to confirm both debugging and maintenance seem easier.
bnchrch
I'm personally someone who advocates for languages to keep their feature set small and shoot to achieve a finished feature set quickly.
However.
I would be lying if I didn't secretly wish that all languages adopted the `|>` syntax from Elixir.
```
params
|> Map.get("user")
|> create_user()
|> notify_admin()
```
chewbacha
Is this pipelining or the builder pattern?
cutler
Clojure has pipeline functions -> and ->> without resorting to OO dot syntax.
amelius
Am I the only one who thinks yuck?
Instead of writing: a().b().c().d(), it's much nicer to write: d(c(b(a()))), or perhaps (d ∘ c ∘ b ∘ a)().
1899-12-30
You can somewhat achieve a pipelined like system in sql by breaking down your steps into multiple CTEs. YMMV on the performance though.
singularity2001
I tried to convince the julia authors to make a.b(c) synonymous to b(a,c) like in nim (for similar reasons as in the article). They didn't like it.
dapperdrake
Pipelining in software is covered by Richard C. Waters (1989a, 1989b). Wrangled this library to work with JavaScript. Incredibly effective. Much faster at writing and composing code. And this code executes much faster.
https://dspace.mit.edu/handle/1721.1/6035
https://dspace.mit.edu/handle/1721.1/6031
https://dapperdrake.neocities.org/faster-loops-javascript.ht…
blindseer
This article is great, and really distills why the ergonomics of Rust is so great and why languages like Julia are so awful in practice.
hliyan
I always wondered how programming would be if we hadn't designed the assignment operator to be consistent with mathematics, and instead had it go LHS -> RHS, i.e. you perform the operation and then decide its destination, much like Unix pipes.
TrianguloY
Kotlin sort of have it with let (and run)
wslh
I also like a syntax that includes pipelining parallelization, for example:
A
.B
.C
tantalor
> allows you to omit a single argument from your parameter list, by instead passing the previous value
I have no idea what this is trying to say, or what it has to do with the rest of the article.
0xf00ff00f
First example doesn't look bad in C++23:
RHSeeger
I feel like, at least in some cases, the article is going out of its way to make the "undesired" version look worse than it needs to be. Comparing
to
The first one would read more easily (and, since it was called out, diff better).
Admittedly, the chaining is still better. But a fair number of the article's complaints are about the lack of newlines being used, not about chaining itself.
gus_leonel
[dead]
guerrilla
This is just super basic functional programming. Seems like we're taking the long way around…
duped
A pipeline operator is just partial application with less power. You should be able to bind any number of arguments to any places in order to create a new function and "pipe" its output(s) to any other number of functions.
One day, we'll (re)discover that partial application is actually incredibly useful for writing programs and (non-Haskell) languages will start with it as the primitive for composing programs instead of finding out that it would be nice later, and bolting on a restricted subset of the feature.
weinzierl
I suffer from (what I call) bracket claustrophobia. Whenever brackets get nested too deep, it makes me uncomfortable. But I fully realize that there are people who are the complete opposite. Lisp programmers are apparently as claustrophilic as cats and spelunkers.
pxc
Maybe it's because I love the Unix shell environment so much, but I also really love this style. I try to make good use of it in every language I write code in, and I think it helps make my control flow very simple. With lots of pipelines, and few conditionals or loops, everything becomes very easy to follow.
taeric
A thing I really like about pipelines in shell scripts, is all of the buffering and threading implied by them. Semantically, you can see what command is producing output, and what command is consuming it. With some idea of how the CPU will be split by them.
This is far different than the pattern described in the article, though. Small shame they have come to have the same name. I can see how both work with the metaphor; such that I can't really complain. The "pass a single parameter" along is far less attractive to me, though.
tpoacher
pipelines are great IF you can easily debug them as easily as temp variable assignments
… looking at you R and tidyverse hell.
layer8
The one thing that I don’t like about pipelining (whether using a pipe operator or method chaining), is that assigning the result to a variable goes in the wrong direction, so to speak. There should be an equivalent of the shell’s `>` for piping into a variable as the final step. Of course, if the variable is being declared at the same time, whatever the concrete syntax is would still require some getting used to, being “backwards” compared to regular assignment/initialization.
bluSCALE4
Same. The sad part is that pipelining seems to be something AI is really good at so I'm finding myself writing less of it.