Whoops.
Four hours before I published last week’s post—which argued that, because LLMs are so expensive to develop, OpenAI shouldn’t be an iPhone but should instead be AWS—Databricks announced Dolly, a chatbot that could be built cheaply and in under 30 minutes.
Their bot wasn’t as good as OpenAI’s GPT-3-backed one—which isn’t as good as one using GPT-4—but it was still, compared to what most people thought chatbots were capable of in the middle of last year, a huge leap forward.
People had questions. Does this change everything? Does that mean that LLMs are becoming commodities? Will every company train their own? Is OpenAI’s strategy of launching an app store—and using an ecosystem (or regulation) as its moat, rather than a technology—the right one? Will open source models ultimately win out?
I have no idea—clearly, this blog is a lousy source of the latest news on AI.
However, it did raise one interesting question for me about where all of this is headed: Are we going to have labor markets for LLMs?
Historically, economists have divided the labor market into two groups: low-skilled workers and high-skilled workers. As the name suggests, low-skilled workers are people who hold jobs that don’t typically require advanced degrees or special training, like service workers, taxi drivers, construction workers, and so on. High-skilled workers, by contrast, are lawyers, doctors, engineers, electricians, and people in any other profession that takes a long time to learn.
Though the terminology is problematic, the concepts provide a useful sketch of how labor markets work. Jobs that require what are perceived as more generic skills get paid low wages, whereas those that require a lot of expertise can usually command higher wages. And jobs that require extreme specialization—say, a heart surgeon over a primary care physician—are paid an even greater premium. Very roughly, the more a person invests in acquiring skills, the more expensive their labor is.
The introduction of Dolly suggests that LLMs and other generative AI models are also “skilled,” and could be divided along the same crude lines that economists use to divide the labor force. Dolly is low-skill. You might not trust it to send an important email to your boss, but you’d probably be fine with it making a restaurant reservation for you. GPT-4 is high-skill—it might actually be good enough for that email to your boss. And companies will surely develop specialized models that extend GPT-4 (or GPT-5, or GPT-6S Plus) with specific training data, and are particularly good at creative writing, or molecular chemistry.
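To make the analogy concrete, here’s a minimal sketch of what “hiring” in a labor market for LLMs might look like: a router that picks the cheapest model whose skill clears the task’s bar, the way an employer hires the least expensive qualified worker. The model names, skill tiers, and per-call prices below are all hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str             # hypothetical model name, not a real product
    skill: int            # crude "skill tier," like a labor-market segment
    cost_per_call: float  # made-up dollars per request

# Entirely invented price points, for the sake of the analogy.
MODELS = [
    Model("dolly-ish", skill=1, cost_per_call=0.0002),
    Model("gpt-3-ish", skill=2, cost_per_call=0.002),
    Model("gpt-4-ish", skill=3, cost_per_call=0.06),
]

def cheapest_qualified(required_skill: int) -> Model:
    """Hire the cheapest model whose skill clears the task's bar."""
    qualified = [m for m in MODELS if m.skill >= required_skill]
    if not qualified:
        raise ValueError("no model is skilled enough for this task")
    return min(qualified, key=lambda m: m.cost_per_call)

# A restaurant reservation is low-stakes; an email to your boss is not.
print(cheapest_qualified(1).name)  # -> dolly-ish
print(cheapest_qualified(3).name)  # -> gpt-4-ish
```

In other words, if models really do stratify by skill, the interesting question isn’t which one is “best,” but which one is the cheapest worker that’s good enough for the job in front of you.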