“The bargain we are being asked to ratify takes the form of a magnificent bribe.” - Lewis Mumford (1964)
Something has changed. At least in the way that AI is publicly discussed.
Where for a long time excited admiration was the dominant reaction to AI, in recent months this has started to shift. Of course, there have always been some who were openly critical of AI, and discussions of AI have long been tinged by an undercurrent of anxiety—but of late it seems that a level of skepticism, if not outright antagonism, towards AI is becoming increasingly acceptable. Or, to put it simply: not that long ago, if you dared to criticize ChatGPT or AI-generated images you’d find yourself tarred as a Luddite. But more recently, if you criticize ChatGPT or AI-generated images, your criticisms are likely to be met with sympathetic nods.
It would be difficult to explain this shift with reference to any single occurrence. It could be that the initial hype around certain public-facing AI products has diminished, and so people are now evaluating these things from a more sober perspective. It could be that, relatedly, disappointment with the reality of actually existing AI products is leading some people to become increasingly distrustful of those hawking the next wave of AI products. It could be that research, such as Microsoft’s recent study that found AI use leaves human cognition “atrophied and unprepared,” or the numerous reports about AI’s environmental toll, are starting to have a real impact. And it could also be that people are just tired of the deluge of crappy AI-generated images (still or moving) flooding social media platforms.
While all of the above are plausible, it seems that an even more likely explanation is that the politics around AI companies (and, by extension, the politics around AI) has changed. Though activists and scholars who work on technology have long argued that technology is political, few things have made that point clearer for many people than watching AI executives and AI personalities excitedly line up behind the Trump administration. There is a surprisingly long history, going back pretty much to AI’s origins, of critical figures arguing that AI was the handmaiden of authoritarianism—and that perspective really gets driven home whenever Elon Musk announces that DOGE is purging a government agency and installing AI in its place. There are some noteworthy parallels here to the first “techlash” that started around Trump’s first election—when many people turned on companies like Facebook and Google for their perceived role in helping Trump get elected—though in this new iteration of the “techlash” the target isn’t social media companies and social media but AI companies and AI.
To be completely clear: regardless of the specific reason why people are souring on AI, this is certainly a welcome shift, though the exact reason undergirding this newfound skepticism will likely determine just how serious or lasting this opposition will truly be.
And yet, alongside this shift, another noteworthy one has taken place. It can be seen in the sorts of responses offered to the growing anti-AI sentiment, playing out in everyday conversations as well as in online spaces.
It usually looks something like this: a person voices some version of the sentiment that “AI is bad” (often in direct response to some headline, or AI-generated image). Someone else retorts that generative AI may be bad, or this particular example of AI may be bad, or this particular company may be bad, but AI is also being used for important medical research, so it isn’t really fair to say that “AI is bad.” Thus begins a back and forth in which the first person either has to clarify that they mean “generative AI is bad” or has to double down. In the former case the AI advocate has triumphed by defanging the “AI is bad” critique; in the latter case the AI advocate gets to keep hitting back by talking about AI’s wonderful potential (even as they acknowledge that generative AI is crummy), thereby making the “AI is bad” person appear so blinkered by their opposition that they cannot appreciate that positive potential.
This can provide an impressive simulacrum of an actual debate or a serious discussion; alas, it is anything but. The matter of “yes, but what about AI in medical research” is almost never raised in order to argue that AI should be permitted in some areas and not in others; rather, it is raised as a clever way of stifling critiques altogether. Under the guise of engaging in a debate about AI’s merits, it is a point meant to foreclose the possibility of real debate. For the “yes, but what about AI in medical research” line tends to eventually boil down to some version of “in order for AI to be successful in that area we need to keep building all of these other AI systems, as the innovations there will eventually feed into AI for medical research” (and so forth).
The point here is not to directly respond to or engage with the arguments about “AI for medical research,” or with the other attempts to defend AI by trying to separate “AI’s positive potential” from “look at this clip AI made of a statue dancing.” The point is to argue that as AI discourse shifts, and as AI advocates find themselves on the defensive, it is vital to understand the function that all of those “but what about AI’s positive potential!?!?!?” arguments actually serve.
And one should understand those arguments as the latest iteration of what the thinker Lewis Mumford once referred to as “the magnificent bribe.”
The Magnificent Bribe
Why is it that so many people seem so unwilling to recognize the dangers inherent in large technological systems?
That is a question which social critics, activists, and scholars have wrestled with for a long time, and which they continue to wrestle with today. It is a question often paired with an almost head-in-hands exasperation at how so many people simply cannot see what is actually happening, and with a certain sense that many people simply don’t fully understand what is happening. It is a question which the twentieth-century thinker Lewis Mumford struggled with throughout his prolific body of work—in which he would often use the terms “megatechnics” or “the megamachine” as shorthand for the large technological systems that he believed were gaining greater and greater power over the course of the twentieth century. And in mulling over that question, Mumford did not take the stance that most people are hopeless rubes, or ignorant dupes.