In lines he composed for a play in the mid-1930s, T. S. Eliot wrote of those who
constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
This has always struck me as a rather apt characterization of a certain technocratic impulse, an impulse that presumes that techno-bureaucratic structures and processes can eliminate the necessity for virtue, or maybe even human involvement altogether.1 We might just as easily speak of systems so perfect that no one will need to be wise or temperate or just. Simply adhere to the code or to the technique with unbending consistency, and all will be well.
This “dream,” as Eliot put it, remains compelling in many quarters. It is also tacitly embedded in the practices fostered by many of our devices, tools and institutions. In fact, a case could be made that the imperative to automate as much of human experience as possible operates as the unacknowledged purpose of contemporary technology. So it’s worth thinking about how this dream manifests itself today and why it can so easily take on a nightmarish quality.
In Eliot’s age, increasingly elaborate and byzantine bureaucracies automated human decision-making in the pursuit of efficiency, speed and scale, thus outsourcing human judgment and, consequently, responsibility. One did not require virtue or good judgment, only a sufficiently well-articulated system of rules. Of course, under these circumstances, a bureaucratic functionary might become a “papier-mâché Mephistopheles,” in Conrad’s memorable phrase, and thus abet forms of what Arendt later called banal evil. But the scale and scope of modern societies also seemed to require such structures in order to operate reasonably well. Whether strictly necessary or not, these systems introduce a paradox: in order to serve human society, they must eliminate or displace key elements of the human experience. Of course, what becomes evident eventually is that these systems are not, in fact, serving human ends, at least not necessarily so.
To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.
Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:
Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.
This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long