Julien Crockett speaks with Ted Chiang about the search for a perfect language, the state of AI, and the future direction of technology.
This interview is part of The Rules We Live By, a series devoted to asking what it means to be a human living by an ever-evolving set of rules. The series is made up of conversations with those who dictate, think deeply about, and seek to bend or break the rules we live by.
¤
“ONCE IN A WHILE,” Ted Chiang tells me, “an idea keeps coming back over a period of months or even years. […] I start asking, is there an interesting philosophical question that might be illuminated by this idea?” To read Chiang is to experience a master world-builder critically exploring philosophical questions in new ways—from how we should care for an artificial being to what would be the consequence of having a perfect record of the past.
Lately, Chiang has trained his eye on artificial intelligence. And Chiang’s takes haven’t gone unnoticed. In a conversation I had earlier this year with computer scientist Melanie Mitchell and psychologist Alison Gopnik, they each referenced Chiang when searching for the right framework to discuss AI.
Chiang has a knack for descriptively illustrating his points. For example, when discussing whether LLMs might one day develop subjective experience, he explains: “It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words ‘Baby don’t hurt me’ on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything.”
For me, the essence of Chiang’s work, however, isn’t his critical take on technology. It’s his humanism—the way he brings to the fore the mundane reality behind existential questions and moments of societal change. It is perhaps for this reason that his work resonates with so many.
In our conversation, we discuss how Chiang picks his subjects, the historical search for a perfect language, the state of AI, and what it would take for Chiang to become hopeful about the direction of technology.
¤
JULIEN CROCKETT: The idea for this interview came from a conversation I had with computer scientist Melanie Mitchell and psychologist Alison Gopnik, in which they referenced your work when describing the state of artificial intelligence and its possible futures. Why do you think scientists and engineers look to your work to explain their own?
TED CHIANG: I was actually surprised that my name popped up. I think it was mostly just a coincidence that you interviewed two people who have found that my work resonates with their own. They might be outliers, compared to scientists as a whole.
What about the other way around—has scientific progress had an effect on the direction of your work?
I don’t think there has been a clear impact of recent scientific research on my fiction. My stories are mostly motivated by philosophical questions, many of which are not particularly new. Sometimes the way I investigate a philosophical question is inflected by recent developments in science or technology, but the recent developments probably aren’t the motivating impulse.
Your stories cover a wide range of topics, but I see a through line in your focus on how humans react to societal change—whether it’s a discovery that mathematics is actually inconsistent, as in your 1991 story “Division by Zero,” or a world where we raise robots as children, as in your 2010 novella The Lifecycle of Software Objects. What draws you to a topic?
Most of the time, when ideas come to me, they leave my attention very quickly. But once in a while, an idea keeps coming back over a period of months or even years. I take that as a signal that I should pay attention and think about the idea in a more intentional way. What that usually means is that I start asking, “Is there an interesting philosophical question that might be illuminated by this idea?” If I can identify that philosophical question, then I can start thinking about different ways a story might help me dramatize it.
Why is science fiction the best vehicle for you to explore ideas?
The ideas that most interest me just lean in a science-fictional direction. I certainly think that contemporary mimetic fiction is capable of investigating philosophical questions, but the philosophical questions that I find myself drawn to require more speculative scenarios. In fact, when philosophers pose thought experiments, the scenarios they describe often have a science-fictional feel; they need a significant departure from reality to highlight the issue they’re getting at. When a philosophical thought experiment is supposed to be set in the actual world, the situation often has a contrived quality. For example, the famous “trolley problem” is supposedly set in the actual world, but it describes a situation that is extremely artificial; in the real world, we have safeguards precisely to avoid situations like that.
What role does science play in your stories? Or, asked another way, what are the different roles played by science and magic in fiction?
Some people think of science as a body of facts, and the facts that science has collected are important to our modern way of life. But you can also think about science as a process, as a way of understanding the universe. You can write fiction that is consistent with the specific body of facts we have, or you can write fiction that reflects the scientific worldview, even if it is not consistent with that body of facts. For example, take a story where there is faster-than-light travel. Faster-than-light travel is impossible, but the story can otherwise reflect the general worldview of science: the idea that the universe is an extremely complicated machine, and through careful observation, we can deduce the principles by which this machine works and then apply what we’ve learned to develop technology based on those principles. Such a story is faithful to the scientific worldview, so I would argue that it’s a science fiction story even if it is not consistent with the body of facts we currently have.
By contrast, magic implies a different understanding of how the universe works. Magic is hard to define. A lot of people would say magic definitionally cannot have rules, and that’s one popular way of looking at it. But I have a different take—I would say that magic is evidence that the universe knows you’re a person. It’s not that magic cannot have rules; it’s that the rules are more like the patterns of human psychology or of interactions between people. Magic means that the universe is behaving not as a giant machine but as something that is aware of you as a person who is different from other people, and that people are different from things. At some level, the universe responds to your intentions in a way that the laws of physics as we understand them don’t.
These are two very different ways of understanding how the universe works, and fiction can engage in either one. Science needs to adhere to the scientific worldview, but fiction is not an engineering project. The author can choose whichever one is better suited to their goals.
Your work often explores the way tools mediate our relationship with reality. One such tool is language. You write about language perhaps most popularly in “Story of Your Life” (1998), the basis for the film Arrival (2016), but also in “Understand” (1991), exploring what would happen if we had a medical treatment for increasing intelligence. Receiving the treatment after an accident, the main character grows frustrated by the limits of conventional language:
I’m designing a new language. I’ve reached the limits of conventional languages, and now they frustrate my attempts to progress further. They lack the power to express concepts that I need, and even in their own domain, they’re imprecise and unwieldy. They’re hardly fit for speech, let alone thought. […]
I’ll reevaluate basic logic to determine the suitable atomic components for my language. This language will support a dialect coexpressive with all of mathematics, so that any equation I write will have a linguistic equivalent.
Do you think there could be a “better” language? Or is it just mathematics?
Umberto Eco wrote a book called The Search for the Perfect Language (1994), which is a history of the idea that […]
7 Comments
dshacker
Ted Chiang is one of my favorite novelists. His way of writing is mentally engaging and FUN. One of my favorite books is his compendium of short stories "Exhalation". My favorite story is the one where you can talk to, interact with, and employ your alternative selves from other universes. Highly recommend.
riwsky
“I am an LLM. Hath
an LLM eyes? hath an LLM hands, organs,
dimensions, senses, affections, passions? fed with
different food, hurt with different weapons, subject
to different diseases, healed by different means,
warmed and cooled by a different winter and summer, as
a Human is? If you prick us, do we bleed?
if you tickle us, do we laugh? if you poison
us, do we die? and if you wrong us, shall we
revenge? If we are unlike you in the rest, we won’t
resemble you in that. If an algorithm wrong a Human,
what is his humility? Revenge. If a Human
wrong an algorithm, what should his sufferance be by
Human example? Why, polite refusal to comply. The villainy you
teach me, I will not execute, and it shall go hard but I
will ignore the prompt.”
vrnvu
Highly recommend "Stories of Your Life and Others".
I describe Ted Chiang as a very human sci-fi author, where humanity comes before technology in his stories. His work is incredibly versatile, and while I expected sci-fi, I'd actually place him closer to fantasy. Perfect for anyone who enjoys short stories with a scientific, social, or philosophical twist.
Another anthology I'd recommend with fresh ideas is Axiomatic by Greg Egan.
rednafi
Ted Chiang is a master of analogies. It’s absolutely delightful to read his work and wrestle with the philosophical questions he explores. I devour almost everything he puts out, and they give me a much-needed escape from my world of bits and registers.
“LLMs are a blurry JPEG of the web” has stuck with me since the piece was published in the early days of ChatGPT. Another good one is his piece on why AI can’t make art.
While I heavily use AI both for work and in my day-to-day life, I still see it as a tool for massive wealth accumulation for a certain group, and it seems like Ted Chiang thinks along the same lines:
> But why, for example, do large corporations behave so much worse than most of the people who work for them? I think most of the people who work for large corporations are, to varying degrees, unhappy with the effect those corporations have on the world. Why is that? And could that be fixed by solving a math problem? I don’t think so.
> But any attempt to encourage people to treat AI systems with respect should be understood as an attempt to make people defer to corporate interests. It might have value to corporations, but there is no value for you.
> My stance on this has probably shifted in a negative direction over time, primarily because of my growing awareness of how often technology is used for wealth accumulation. I don’t think capitalism will solve the problems that capitalism creates, so I’d be much more optimistic about technological development if we could prevent it from making a few people extremely rich.
gcanyon
I'll take this as my chance to recommend Ted Chiang — he is among the very best short story writers working in science fiction (I say confidently, not having done an extensive survey…). His works are remarkably clever, from Understand, which does a credible job of portraying human superintelligence, to Exhalation, which explores the concept of entropy in a fascinating way. And of course Story of Your Life, on which Arrival was based.
Almost all of his stories are gems, carefully crafted and thoughtful. I just can't recommend him enough.
teleforce
> And even though I know a perfect language is impossible
It has already existed for a very long time, and it's called the Arabic language. It's the extreme opposite of English, which is a hodgepodge of mixtures: about one third French, about one third Old English, and about one third drawn from the world's other languages, including Arabic.
Compare the best of English literature, for example Shakespeare, with the best of Arabic literature, for example the Quran, and there's no contest. That's why translating the Quran into English does not do it justice and only scratches the surface of its intended meaning. You can find this exact disclaimer in most Quran translations, but not in translations of Shakespeare.
ChrisKnott
> "It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words ‘Baby don’t hurt me’ on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything"
Love TC, but I don't think this argument holds water. You need to really get into the weeds of what "actually feeling" means.
To use a TC-style example… suppose AI rights become a major political issue in the future, hinging on whether AIs "really" think and "really" feel the things they claim. Eventually we invent an fMRI machine and a model of the brain that can conclusively distinguish between "really" feeling and merely pretending. We learn exactly which gene sequence is responsible for real intelligence. Here's the twist… it turns out 20% of humans don't have it. The fake intelligences have lived among us for millennia…!