For decades, Silicon Valley anticipated the moment when a new technology would come along and change everything. It would unite human and machine, probably for the better but possibly for the worse, and split history into before and after.
The name for this milestone: the Singularity.
It could happen in several ways. One possibility is that people would add a computer’s processing power to their own innate intelligence, becoming supercharged versions of themselves. Or maybe computers would grow so complex that they could truly think, creating a global brain.
In either case, the resulting changes would be drastic, exponential and irreversible. A self-aware superhuman machine could design its own improvements faster than any group of scientists, setting off an explosion in intelligence. Centuries of progress could happen in years or even months. The Singularity is a slingshot into the future.
Artificial intelligence is roiling tech, business and politics like nothing in recent memory. Listen to the extravagant claims and wild assertions issuing from Silicon Valley, and it seems the long-promised virtual paradise is finally at hand.
Sundar Pichai, Google’s usually low-key chief executive, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, a billionaire investor, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Microsoft’s co-founder Bill Gates proclaims A.I. “will change the way people work, learn, travel, get health care and communicate with each other.”
A.I. is Silicon Valley’s ultimate new product rollout: transcendence on demand.
But there’s a dark twist. It’s as if tech companies introduced self-driving cars with the caveat that they could blow up before you got to Walmart.
“The advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that,” Elon Musk, who runs Twitter and Tesla, told CNBC last month. He said he thought “an age of abundance” would result but there was “some chance” that it “destroys humanity.”
The biggest cheerleader for A.I. in the tech community is Sam Altman, chief executive of OpenAI, the start-up that prompted the current frenzy with its ChatGPT chatbot. He says A.I. will be “the greatest force for economic empowerment and a lot of people getting rich we have ever seen.”
But he also says Mr. Musk, a critic of A.I. who also started a company to develop brain-computer interfaces, might be right.
Apocalypse is familiar, even beloved territory for Silicon Valley. A few years ago, it seemed every tech executive had a fully stocked apocalypse bunker somewhere remote but reachable. In 2016, Mr. Altman said he was amassing “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.” The coronavirus pandemic made tech preppers feel vindicated, for a while.
Now, they are prepping for the Singularity.
“They like to think they’re sensible people making sage comments, but they sound more like monks in the year 1000 talking about the Rapture,” said Baldur Bjarnason, author of “The Intelligence Illusion,” a critical examination of A.I. “It’s a bit frightening,” he said.
The roots of transcendence
The Singularity’s intellectual roots go back to John von Neumann, a pioneering computer scientist who in the 1950s talked about how “the ever-accelerating progress of technology” would yield “some essential singularity in the history of the race.”
Irving John Good, a British mathematician who helped decode the German Enigma device at Bletchley Park during World War II, was also an influential proponent. “The survival of man depends on the early construction of an ultra-intelligent machine,” he wrote in 1964. The director Stanley Kubrick consulted Mr. Good on HAL, the sentient computer of “2001: A Space Odyssey.”