North Texas’ resident tech genius, John Carmack, is now taking aim at his most ambitious target: solving the world’s biggest computer-science problem by developing artificial general intelligence. That’s a form of AI that can understand, learn, and perform any intellectual task a human can.
Inside his multimillion-dollar manse on Highland Park’s Beverly Drive, Carmack, 52, is working to achieve AGI through his startup Keen Technologies, which raised $20 million in a financing round in August from investors including Austin-based Capital Factory.
This is the “fourth major phase” of his career, Carmack says, following stints in computers and pioneering video games with Mesquite’s id Software (founded in 1991), suborbital space rocketry at Mesquite-based Armadillo Aerospace (2000-2013), and virtual reality with Oculus VR, which Facebook (now Meta) acquired for $2 billion in 2014. Carmack stepped away from Oculus’ CTO role in late 2019 to become consulting CTO for the VR venture, proclaiming his intention to focus on AGI. He left Meta in December to concentrate full-time on Keen.
We sat down with the tech icon during a rare break in his work to conduct the following exclusive interview—a frank conversation that took Dallas Innovates weeks to arrange. The Q&A has been edited for length and clarity.
What sort of work are you doing now to ‘solve’ artificial general intelligence, John, and why are you taking your particular approach?
I sit here at my computer all the time, thinking up concepts, documenting them, making theories, testing them. That’s the work right now, as nobody really knows the full path all the way to where we want to go. But I think I’ve got as good a shot at it as anyone, for a number of reasons.
Some people have raised billions of dollars to pursue this. And while that’s interesting in some ways, and there are signs that extremely powerful things are possible right now in the narrow machine-learning stuff, it’s not clear that those are the necessary steps to get all the way to artificial general intelligence. For companies that are happy to do that, it’s not a bad bet, because there are plenty of off-ramps where there are valuable things, even if you don’t get all the way. There’s still stuff that’s going to change the world, like the narrow AI.
But the worry is that you just take the first off-ramp and say, ‘Hey, there’s a billion-dollar off-ramp right here, where we know we can just go take what we understand and revolutionize various industries.’ That becomes a very tempting thing to do, but it distracts everyone from looking further ahead and focusing on the big, far-distance stuff. So, I’m in a position where I can be really blunt about what I’m doing, and that is: zero near-term business opportunities.
What compelled your interest in the topic in the first place?
We’re in the midst of a scientific revolution right now, because 10 years ago, there was not the sense that AI was working. We’ve had these AI ‘winters’—a couple of them over the decades, in fact. It’s funny, because virtual reality also went through this: It had almost become a bad word because it had crashed so badly in the 1990s that people didn’t even want to talk about it.
And artificial intelligence had a couple of those cycles where hype spins up, money flows in, it underperforms, and then it crashes, and nobody wants to talk about it. But this last decade was different, and people who don’t notice how different it is this time aren’t paying attention, because the number of absolutely astounding things that have happened in the last decade with machine learning is really profound.
[Photo of John Carmack by Michael Samples]
That was the thing that really pushed me toward saying, ‘All right, it’s probably time to take a serious look at this.’ And it was interesting for me, because I had a technical bystander’s understanding of machine learning and AI, where I had read some of the seminal books even in my teenage years, and I knew about symbolic AI and all those types of things. So, my brain knew a little bit about these things, but I wasn’t following what was going on because I was busy with my work with the games, the aerospace, and the virtual reality.
You get to that point where you recognize, ‘Okay, I think there’s probably something here I need to sort out—what is hype versus what is the reality?’ So I did what I usually do: All of my real abilities have always come from understanding things fundamentally, at the very deepest levels, where there are insights that you only get from knowing how things happen from the very bottom.
So, about four years ago, I went on one of my week-long retreats, where I just take a computer and a stack of reference materials and I spend a week kind of reimplementing the fundamentals of the industry. And getting to the point where it’s like, ‘All right, I understand this well enough to have a serious conversation with a researcher about it.’ And I was pretty excited about getting to that level of understanding.
Then after that, Sam Altman of OpenAI invited me to a conference—the Y Combinator’s YC 120—and, while historically I never go to these types of things (because of my hermit tendencies), this time I decided to attend. It turned out it was really orchestrated by Sam to jump me for a job pitch, because he had Greg Brockman and Ilya Sutskever of OpenAI come and try to get me to go to OpenAI. I was pretty flattered by that, because I was not a machine learning expert by any means. I was a well-known systems engineer on a lot of this stuff, but I only had this basic baseline knowledge [of AI]. So, the idea that they were the leaders in the field, and they thought I was worth trying to get there, that really planted the seed to make me think about the importance of everything that’s going on and what role I could play in it.
So I asked Ilya, their chief scientist, for a reading list. This is my path, my way of doing things: give me a stack of all the stuff I need to know to actually be relevant in this space. And he gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head.
John Carmack in his VR room. [Photo: Michael Samples]
You were still with Meta at this time working on virtual reality, right?
Yes, and I was having some real issues at Meta with large-scale strategic directions. I’m sure you’ve seen some of the headlines about how much money they’re spending, and I thought a large fraction of it was really poorly spent. I was having some challenges there, and I was at the end of my five-year buyout contract from when Oculus was acquired. That was when I decided, ‘Okay, I’m going to get more serious about this artificial general intelligence work.’
All the things that I’d done before—in games, rockets, virtual reality—I was aiming for something that wasn’t there yet, but I had a clear line of sight. It’s different for AGI, though, because nobody knows how to do it. It’s not a simple matter of engineering. But there are all these tantalizing clues given what happened in this last decade—it’s like a handful of relatively simple ideas. They’re not these extreme black-magic mathematical wizardries—a lot of them are relatively simple.