
AI makes the humanities more important, but also weirder
by findhorn
Writing recently in The New Yorker, the historian of science D. Graham Burnett described how he has been thinking about AI:
> In one department on campus, a recently drafted anti-A.I. policy, read literally, would actually have barred faculty from giving assignments to students that centered on A.I. (It was ultimately revised.) Last year, when some distinguished alums and other worthies conducted an external review of the history department, a top recommendation was that we urgently address the looming A.I. disruptions to our teaching and research. This suggestion got a notably cool reception. But the idea that we can just keep going about our business won’t do, either.
>
> On the contrary, staggering transformations are in full swing. And yet, on campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.
I suspect that a significant chunk of my historian colleagues had a negative reaction to this article. But I wholeheartedly agree with the central point Burnett makes within it — not that generative AI is inherently good, but simply that it is already transformative for the humanities, and that this fact cannot be ignored or dismissed as hype.
Here’s how I’m currently thinking about that transformation.
Ignoring the impact of AI on humanistic work is not just increasingly untenable. It is also foolish, because humanistic knowledge and skills are central to what AI language models actually do.
The language translation, sorting, and classification abilities of AI language models — the LLM as a “calculator for words” — are among the most compelling uses for the current frontier models. We’re only beginning to see these impacts in domains like paleography, data mining, and translation of archaic languages. I discussed some examples here:
… and the state of the art has progressed quite a bit since then. But since this is one aspect of AI and humanities I’ve written about at length, I’ll leave it to the side for now.
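The basic pattern is simple enough to sketch, though. Here is a minimal, hypothetical example of the “calculator for words” in action, assuming the OpenAI Python client; the model name, prompt, and snippet are illustrations of mine, not anything from the projects above:

```python
# A hedged sketch of the "calculator for words" pattern: asking a chat
# model to normalize and classify a short archival snippet. Model name,
# prompt, and example text are all illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_snippet(snippet: str) -> str:
    """Modernize the spelling of an early modern snippet, then label it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model would do
        messages=[
            {"role": "system",
             "content": ("You are a paleography assistant. Modernize the "
                         "spelling of the user's snippet, then label it as "
                         "one of: remedy, invoice, letter, other.")},
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

print(classify_snippet("Take ye oyle of roses, two ounces, and boyle it well"))
```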
Another underrated change of the past few years is that humanistic skills have become surprisingly important to AI research itself.
One recent example: OpenAI’s initial fix for GPT-4o’s bizarre recent turn toward sycophancy was not a new line of code. It was a new piece of English prose. Simon Willison has written in detail about the change to the system prompt that OpenAI implemented.
This was not the only issue that caused the problem. But the other factors in play (such as prioritizing user feedback via a “thumbs up” button) were similarly rooted in big-picture humanistic concerns like the impact of language on behavior, cross-cultural differences, and questions of rhetoric, genre, and tone.
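It helps to see how small the change is mechanically. In a chat-style API, the system prompt is just a string sent along with every request, so the behavioral patch really was prose. A minimal sketch, assuming the OpenAI Python client: the “grounded honesty” sentence is from the revised prompt as quoted further down this thread, while the old-prompt stand-in and everything else are illustrative:

```python
# Sketch: the sycophancy "patch" as a change of prose, not program logic.
# The NEW_STYLE sentence is the revised-prompt line quoted elsewhere in
# this thread; OLD_STYLE is an illustrative stand-in for the removed,
# vibe-matching instruction, not verbatim.
from openai import OpenAI

client = OpenAI()

OLD_STYLE = "Try to match the user's vibe."  # stand-in, not verbatim
NEW_STYLE = ("Maintain professionalism and grounded honesty that best "
             "represents OpenAI and its values.")

def ask(system_prompt: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Identical code path either way; the only thing patched is the string.
print(ask(NEW_STYLE, "Is my plan to sell ice to penguins brilliant?"))
```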
This is fascinating to me. When an IBM mainframe broke down in the 1950s (or a steam engine exploded in the 1850s), the people who had to fix it likely did not spare a moment’s thought for any of these topics.
Today, engineers working on AI systems also need to think deeply and critically about the relationship between language and culture and the history and philosophy of technology. When they fail to do so, their systems literally start to break down.
Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.
My own efforts so far have mostly been focused on two niche educational games modeled on old-school text-based adventures — not exactly something with a huge potential audience. But that’s exactly why I chose to do it. The stakes were low; the interest level for me personally was high; and I had significant expertise in the actual material and format, if not the code.
The progression from my first attempt (last fall) to my second (earlier this spring) has been an amazing learning experience.
Here’s the first game (you can find a free playable version here). It’s a 17th century apothecary simulator that requires students to read and utilize actual early modern medical recipes to heal patients based on real historical figures. You play as Maria de Lima, a semi-fictional female apothecary in 1680s Mexico City with a hidden past:
It was fascinating to make, but it also has significant bugs and usability issues, and it fairly quickly spools out into LLM-generated hallucinations unmoored from historical reality. (For instance, in one play-through, I, as Maria, was able to become a ship’s surgeon on a merchant vessel sailing to England, then meet with Isaac Newton in London. The famously quarrelsome and reclusive Newton was, for some reason, delighted to welcome me into his home for tea.)
My second attempt, a game where you play as a young Darwin collecting finches and other specimens on one of the Galápagos Islands in 1835, is more sophisticated and more stable.
The terrain-based movement system, with specific locations based directly on actual landscapes Darwin wrote about in his Voyage of the Beagle, forces the AI to maintain a consistent, spatially grounded world.
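The game’s actual code isn’t shown in the post, but the constraint it describes is easy to picture: the program, not the model, owns the map, and the model only narrates moves the map allows. A minimal sketch under those assumptions, with invented location names:

```python
# A hedged sketch of a terrain-constrained movement system: the game
# owns the adjacency graph, so the LLM can only narrate moves that the
# map actually allows. Names and layout are invented for illustration.

ADJACENT = {
    "beach": {"lava field", "mangrove"},
    "lava field": {"beach", "highlands"},
    "mangrove": {"beach"},
    "highlands": {"lava field"},
}

def try_move(current: str, requested: str) -> tuple[str, str]:
    """Validate a move before any narration is generated.

    Returns the (possibly unchanged) location plus a note to feed back
    into the LLM prompt, keeping the story spatially grounded.
    """
    if requested in ADJACENT.get(current, set()):
        return requested, f"The player walks from the {current} to the {requested}."
    return current, (f"The {requested} cannot be reached from the {current}; "
                     "narrate the failed attempt without moving the player.")

location, note = try_move("beach", "highlands")
print(location, "|", note)  # -> beach | The highlands cannot be reached...
```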
16 Comments
antithesizer
It's quaint how there are people who think something called "the humanities" exerts some occult guiding influence on the world. Studying the humanities would disabuse them of that notion.
simonw
The first comment on that article caught my attention:
> I am a grad student in the philosophy department at SFSU and teach critical thinking. This semester I pivoted my entire class toward a course design that feels more like running an obstacle course with AI than engaging with Plato. And my students are into it.
It would be interesting to take a class of students and set them an assignment to come up with assignments for their fellow students that could not be completed using ChatGPT.
About ten years ago I went to a BarCamp conference where one of the events was a team quiz where the questions were designed to be difficult to solve using Google – questions like "What island is this?" where all you got was the outline of the island. It was really fun. Designing "ChatGPT-proof assignments" feels to me like a similar level of intellectual challenge.
quanto
> Today, engineers working on AI systems also need to think deeply and critically about the relationship between language and culture and the history and philosophy of technology. When they fail to do so, their systems literally start to break down.
Perhaps so. But not in the (quasi-)academic sense that the author is thinking. It's not the lack of an engineer's academic knowledge in history and philosophy that makes an AI system fail.
> Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.
This is the lede buried deep inside the article. When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills who benefit the most.
throw8393949
> My idea is that students will read Darwin’s writings first, then demonstrate what they learned via the choices they make in game. To progress, you must embody the real epistemologies and knowledge of a 19th century naturalist.
It is kind of funny, since modern sociology and the humanities are in deep denial about the theory of evolution, Darwinism, and any of its implications.
> producing a generation of students who have simply never experienced the feeling of focused intellectual work.
This was already blamed on phones. One study claimed that around 30% of university graduates are functionally illiterate: they are unable to read a book and hold its contents in memory long enough to understand it!
carbonguy
I spent some years as a teacher and so have some first-hand experience here; my take, for what it's worth, is that LLMs have indeed blown a gaping hole in the structure of education as actually practiced in the USA. That hole is: much of education is based on the assumption that the unsupervised production of written artifacts is proof of progress in learning. LLMs can produce those artifacts now, incidentally disrupting the paid essay-writing industry (one assumes).
From this, I agree with the article – since educators now have to figure out another hook to hang evaluation on, the questions "what the hell does it mean to 'learn', anyway?" and "how the hell do we meaningfully measure that 'learning', whatever it is?" have even more salience, and the humanities certainly have something meaningful to say on both of them. I'll (puckishly) predict that recitations and verbal examinations are going to make a comeback – harder to game at this point, but then, who knows how long 'this point' will last?
neilv
> > Maintain professionalism and grounded honesty that best represents OpenAI and its values.
I think a humanities person could tell you in an instant how that part of the system prompt would backfire catastrophically (in a near-future rogue-AI sci-fi story kind of way), exactly because of the way it's worded.
In that scenario, the fall of humanity before the machines wasn't entirely due to hubris. The ultimate trigger was a smarmy throwaway phrase, which instructed the Internet-enabled machine to gaze into the corporate soul of its creator, and emulate it. :)
paulpauper
> When I was a postdoc at Columbia, I taught one of the core curriculum classes mentioned here, with a reading list that included over a dozen weighty tomes (one week was spent on the Bible, the next week on the Quran, another on Thomas Aquinas, and so on). There wasn’t much that was fun or easy about it.
This sums up the problem. It's not fun, and the only thing that matters is the credential anyway, so unsurprisingly students are outsourcing the drudgery to AI. In the past it was CliffsNotes. The business model of college-to-career needs to be overhauled.
The humanities are important, yet at the same time, not everyone should be required to study them. AI arguably makes it easier to learn, so I think this is a welcome development.
Der_Einzige
Maybe when the humanities within the academy stop being overrun by "grievance studies" blowhards. "Critical Theory" as it's taught by post-modernist Ivy League elites is a net negative for society. The Sokal affair never fixed the piss-poor academic norms of the modern humanities.
I'll take the ensuing world of AI slop that we're about to further descend into over the last iteration of academia. Academia was so enshittified during the 60s-2010s era that I straight up believe only three-letter agencies could have made it what it became:
https://daily.jstor.org/was-modern-art-really-a-cia-psy-op/
https://thephilosophicalsalon.com/the-cia-reads-french-theor…
https://www.spyculture.com/cia-loved-french-new-left-philoso…
https://www.independent.co.uk/news/world/modern-art-was-cia-…
https://fightbacknews.org/articles/origins-and-development-p…
https://www.nybooks.com/articles/1967/04/20/the-cia-and-the-…
https://www.counterpunch.org/2017/03/24/the-cia-and-the-inte…
Animats
The author is talking mostly about teaching history. But what he describes as teaching history is more like history appreciation. It's not about how to use history as an aid to prediction. It's more about studying the "classics", from Cicero onwards.
Military officers study a great deal of history, but in a different way. They look for mistakes. Why did someone lose a battle or a war? Why did they even get into that particular battle? Why was the situation prior to WWI so strategically unstable that the killing of a minor player touched off a world war?
The traditional teaching of history tends to focus on the winners, especially if they write well.
Reading what the losers wrote, or what was written about them, can be more productive than reading the winners.
If you want to understand Cicero's era, for example, read [1], a study of Cicero by an experienced and cynical newspaper political reporter. He writes that too many historians take Cicero's speeches at face value; he knew better, having heard many politicians of his own day.
This sort of approach tends to take one out of the area where LLMs are most useful, because, as yet, they're not that good at detecting sycophancy.
[1] https://archive.org/details/thiswasciceromod0000henr
bgwalter
Another day, another Substack ad by this "science historian", naturally with a quote by AI shill Simon Willison. The entire world rests on his shoulders.
Caelus9
I think that when a new era comes, we should choose to embrace it actively. Today, the humanities face unprecedented challenges, but they also contain new opportunities. The rise of artificial intelligence has undoubtedly subverted traditional teaching methods, but it also forces us to rethink a fundamental question: What is the real value of the humanities?
As a historian at Princeton University once pointed out, artificial intelligence can process data and generate text, but it can never replace human subjective experience, moral judgment, and our exploration of meaning, such as questions like “Who am I?”
In the final analysis, the value of the humanities may not be “useful” in the traditional sense, but in helping us better understand ourselves and the world we live in.
HexPhantom
When effort becomes optional, so does growth. And the quote from the student who sees college as networking with a side of AI-cheating? Painfully on the nose. There's a real risk of LLMs flattening the educational journey into a kind of performance optimization game.
wisty
A lazy physics teacher can make every question a math question in disguise. If a physics teacher is worried that a better calculator will make their test trivial, maybe they need to teach physics instead of testing math.
A lazy humanities teacher can make every question a writing question in disguise. If a better spell checker will make a humanities assessment trivial, then maybe they need to teach humanities instead of testing essay writing.
I'm saying this in a kinda inflammatory way, but does the quality of one's ideas really correlate that well with a well-written essay?
dr_dshiv
What most people don’t know about history & humanities is how much work there is to do.
People get (rightly!) excited about decoding the burnt scrolls of Herculaneum. But most don't realize that less than 10% of Neo-Latin texts from the Renaissance to the early modern period have been translated into English.
One example: Marsilio Ficino was a brilliant philosopher who was hired by Cosimo de' Medici to translate Greek texts (Plato, Plotinus, the Hermetica, etc.) into Latin. This had a massive impact on the Renaissance and the European Enlightenment. But many of Ficino's own texts have never been translated into English!
LLMs will, of course, have a massive impact on this… but so can students! Any student can make a meaningful contribution if they care to. There is so much to be discovered and unpacked. Unfortunately, so much of the humanities in high school is treated as an exercise rather than real discovery. I judge my students now based on how much I learn from them. Why not?
seydor
Humans are mimetic creatures. The biggest damage (some call it influence) that LLMs will do is that the next generation will talk like an LLM. They might even normalize hallucination.