This blog post is co-authored by Noah and roon. roon is a researcher at a prominent AI company, and he also posts humorously on Twitter. Because this is a joint post, we sometimes refer to one of us in the third person.
If you talk to people about the potential of artificial intelligence, almost everybody brings up the same thing: the fear of replacement. For most people, this manifests as a dread certainty that AI will ultimately make their skills obsolete. For those who actually work on AI, it usually manifests as a feeling of guilt – guilt over creating the machines that put their fellow humans out of a job, and guilt over an imagined future where they’re the only ones who are gainfully employed.
In recent months, those uneasy feelings have intensified, as investment and innovation in generative AI have exploded. A relatively new innovation in machine learning called diffusion models brought text-to-image generation to maturity. A wave of AI art applications like Midjourney and Stable Diffusion have made a huge splash, and Stability AI has raised $101 million. Meanwhile, Jasper, a company that uses AI to generate written content, raised $125 million. In an era when much of the tech industry seems to be down in the dumps, AI is experiencing a golden age. And this has lots of people worried.
To put it bluntly, we think the fear, and the guilt, are probably mostly unwarranted. No one knows, of course, but we suspect that AI is far more likely to complement and empower human workers than to impoverish them or displace them onto the welfare rolls. This doesn’t mean we’re starry-eyed Panglossians; we realize that this optimistic perspective is a tough sell, and even if our vision comes true, there will certainly be some people who lose out. But what we’ve seen so far about how generative AI works suggests that it’ll largely behave like the productivity-enhancing, labor-saving tools of past waves of innovation.
If AI causes mass unemployment among the general populace, it will be the first time in history that any technology has done so. Industrial machinery, computer-controlled machine tools, software applications, and industrial robots all caused panics about human obsolescence, and nothing of the kind ever came to pass; pretty much everyone who wants a job still has a job. As Noah has written, a wave of recent evidence shows that adoption of industrial robots, and automation technology in general, is associated with an increase in employment at the company and industry level.
That’s not to say it couldn’t happen, of course – sometimes technology does totally new and unprecedented things, as when the Industrial Revolution suddenly allowed humans to escape Malthusian poverty for the first time. But it’s important to realize exactly why the innovations of the past didn’t result in the kind of mass obsolescence that people feared at the time.
The reason was that instead of replacing people entirely, those technologies simply replaced some of the tasks they did. If, like Noah’s ancestors, you were a metalworker in the 1700s, a large part of your job consisted of using hand tools to manually bash metal into specific shapes. Two centuries later, after the advent of machine tools, metalworkers spent much of their time directing machines to do the bashing. It’s a different kind of work, but you can bash a lot more metal with a machine.
Economists have long realized that it’s important to look at labor markets not at the level of jobs, but at the level of tasks within a job. In their excellent 2018 book Prediction Machines, Ajay Agrawal, Joshua Gans, and Avi Goldfarb talk about the prospects for predictive AI – the kind of AI that autocompletes your Google searches. They offer the possibility that this tech will simply let white-collar workers do their jobs more efficiently, similar to what machine tools did for blue-collar workers.
Daron Acemoglu and Pascual Restrepo have a mathematical model of this (here’s a more technical version), in which they break jobs down into specific tasks. They find that new production technologies like AI or robots can have several different effects. They can make workers more productive at their existing tasks. They can shift human labor toward different tasks. And they can create new tasks for people to do. Whether workers get harmed or helped depends on which of these effects dominates.
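To make those three effects concrete, here is a deliberately toy sketch in the spirit of the task-based model (the tasks and numbers are all made up for illustration, not taken from the actual Acemoglu–Restrepo paper):

```python
# Hypothetical toy version of a task-based model: a job is a bundle of
# tasks, and a new technology can (1) boost the worker's productivity on
# tasks they keep, (2) shift some tasks from the human to the machine,
# and (3) create brand-new tasks for the human. Whether the worker is
# helped or harmed depends on which channel dominates.

def human_output(tasks):
    """Total value produced by the human worker across their tasks."""
    return sum(tasks.values())

# Before automation: the worker does four tasks (made-up values).
before = {"draft": 1.0, "edit": 1.0, "format": 1.0, "research": 1.0}

# After automation: "format" is taken over by the machine (displacement),
# "draft" and "edit" get faster with the new tool (productivity effect),
# and a new task, "prompt-and-curate", appears (new-task effect).
after = {"draft": 1.5, "edit": 1.5, "research": 1.0, "prompt-and-curate": 0.8}

print(human_output(before))  # 4.0
print(human_output(after))   # 4.8 -- in this scenario the worker is helped
```

With different numbers, of course, the displacement channel could dominate and the worker's output would fall; the model's point is precisely that the net effect is an empirical question, not a foregone conclusion.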
In other words, as Noah likes to say, “Dystopia is when robots take half your jobs. Utopia is when robots take half your job.”
You don’t need a fancy mathematical model, however, to understand the basic principle of comparative advantage. Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.
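The Marc-and-the-secretary logic can be written out as a few lines of arithmetic. Here is a sketch with invented productivity numbers (units per hour chosen purely for illustration), showing that even when Marc is absolutely better at both tasks, the secretary has the comparative advantage in typing:

```python
# Toy illustration of comparative advantage (hypothetical numbers).
# Marc is absolutely better at BOTH tasks, but the secretary still gets
# the typing work, because Marc's opportunity cost of typing is higher.

# Output per hour for each person at each task (made-up units).
output = {
    "Marc":      {"deals": 10.0, "letters": 8.0},
    "Secretary": {"deals": 0.5,  "letters": 4.0},
}

def opportunity_cost(person, task, other_task):
    """Units of other_task forgone per unit of task produced."""
    rates = output[person]
    return rates[other_task] / rates[task]

# Cost of one letter, measured in deals given up:
marc_cost = opportunity_cost("Marc", "letters", "deals")       # 10/8  = 1.25
sec_cost  = opportunity_cost("Secretary", "letters", "deals")  # 0.5/4 = 0.125

# The secretary's opportunity cost per letter is far lower, so the
# efficient arrangement is Marc on deals and the secretary on letters.
assert sec_cost < marc_cost
print(f"Marc's cost per letter:      {marc_cost:.3f} deals")
print(f"Secretary's cost per letter: {sec_cost:.3f} deals")
```

Each letter Marc types himself costs 1.25 forgone deals, while each letter the secretary types costs only 0.125; specialization makes the pair jointly richer even though Marc is faster at everything.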
Now think about this in the context of AI. Some people think that the reason previous waves of innovation didn’t make humans obsolete was that there were some things humans still did better than machines – e.g. writing. The fear is that AI is different, because the holy grail of AI research is something called “general intelligence” – a machine mind that performs all tasks as well as, or better than, the best humans. But as we saw with the example of Marc and the secretary, just because you can do everything better doesn’t mean you end up doing everything! Applying the idea of comparative advantage at the level of tasks instead of jobs, we can see that there will always be something for humans to do, even if AI would do those things better. Just as Marc has a limited number of hours in the day, AI resources are limited too.