Would you trust your life to an artificial intelligence?
The current state of AI is impressive, but seeing it as bordering on generally intelligent is an overstatement. If you want to get a handle on how well the AI boom is going, just answer this question: Do you trust AI?
Google’s Bard and Microsoft’s ChatGPT-powered Bing large language models both made boneheaded mistakes during their launch presentations that could have been avoided with a quick web search. LLMs have also been spotted getting the facts wrong and pushing out incorrect citations.
It’s one thing when those AIs are just responsible for, say, entertaining Bing or Bard users, DARPA’s Matt Turek, deputy director of the Information Innovation Office, tells us. It’s another thing altogether when lives are on the line, which is why Turek’s agency has launched an initiative called AI Forward to try to answer what exactly it means to build an AI system we can trust.
Trust is …?
In an interview with The Register, Turek said he likes to think of building trustworthy AI with a civil engineering metaphor that also involves placing a lot of trussed trust in technology: Building bridges.
“We don’t build bridges by trial and error anymore,” Turek says. “We understand the foundational physics, the foundational material science, the system engineering to say, I need to be able to span this distance and need to carry this sort of weight,” he adds.
Armed with that knowledge, Turek says, the engineering sector has been able to develop standards that make building bridges straightforward and predictable, but we don’t have that with AI right now. In fact, we’re in an even worse place than simply not having standards: the AI models we’re building sometimes surprise us, and that’s bad, Turek says.
“We don’t fully understand the models. We don’t understand what they do well, we don’t understand the corner cases, the failure modes … what that might lead to is things going wrong at a speed and a scale that we haven’t seen before.”
Reg readers don’t need to imagine apocalyptic scenarios in which an artificial general intelligence (AGI) begins killing humans and …