Doug Lenat was one of the most brilliant, acerbically funny people I have ever met. If people like Marvin Minsky, John McCarthy, and Allen Newell were among the first to think deeply about how symbolic AI, in which machines manipulate explicit verbal-like representations, might work, Doug was the first to try really hard to make it actually work. I have spent my whole career arguing for consilience between neural networks and symbolic AI, and on the strictly symbolic side of that equation, Lenat was light-years ahead of me, not just more deeply embedded in those trenches than I, but the architect of many of those trenches.
Lenat spent the last 40 years of his life launching and directing a project called Cyc, an intense effort to codify all of common sense in machine-interpretable form. Too few people thinking about AI today even know what that project is. Many who do write it off as a failure. Cyc (and the parent company, Cycorp, that Lenat formed to incubate it) never exploded commercially – but hardly anybody ever gives it credit for the fact that it is still in business 40 years later; very few AI companies have survived that long.
My own view is that Cyc has been neither a success nor a failure, but somewhere in between: I see it as a ground-breaking, clarion experiment that never fully gelled. No, Cyc didn’t set the world on fire, but yes, it will seem more and more important in hindsight, as we eventually make real progress towards artificial general intelligence.
Most young AI researchers have never even heard of it. But every single one of them should know something about Cyc. They don’t need to like it, but they should understand what it was, what it tried to do, and what they might do instead to accomplish the same goals.
Not because Cyc will get used out of the box, as some sort of drop-in replacement for Large Language Models, but because what Lenat tried to do – to get machines to represent and reason about common sense – still must be done. Yejin Choi’s wonderful 2023 TED talk, Why AI is incredibly smart and shockingly stupid, followed directly in that tradition, explaining why, despite their apparent successes, current AI systems still lack common sense. (My 2019 book with Ernie Davis, Rebooting AI, was very much on the same topic.)
Metaphorically, Lenat tried to find a path across the mountain of common sense, the millions of things we know about the world but rarely articulate. He didn’t fully succeed – we will need a differen