When Alexey Radul began graduate work at MIT’s Computer Science and Artificial Intelligence Lab in 2003, he was interested in natural-language processing — designing software that could understand ordinary written English. But he was so dissatisfied with the computer systems that natural-language researchers had to work with that, in his dissertation, he ended up investigating a new conceptual framework for computing. The work, which Radul is now pursuing as a postdoc in the lab of Gerald Sussman, the Matsushita Professor of Electrical Engineering, is still in its infancy. But it could someday have consequences for artificial-intelligence research, parallel computing and the design of computer hardware.
Artificial-intelligence systems, Radul explains, often tackle problems in stages. A natural-language program trying to make sense of a page of written text, for instance, first determines where words and sentences begin and end; then it identifies each word’s probable part of speech; then it diagrams the grammatical structure of the sentences. Only then does it move on to stages with names like “scope resolution” and “anaphora.” The process might have a dozen stages in all.
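To make the shape of that architecture concrete, here is a minimal sketch in Python of such a one-way pipeline. The stage functions are naive placeholders invented for illustration, not Radul's actual code; only the structure, in which each stage consumes its predecessor's output and nothing flows backward, is the point:

```python
# A toy version of the staged pipeline described above. The stage
# implementations are deliberately trivial stand-ins.

def segment(text):
    # Stage 1: decide where sentences and words begin and end.
    return [sentence.split() for sentence in text.split(". ") if sentence]

def tag(sentences):
    # Stage 2: guess each word's probable part of speech (dummy tagger).
    return [[(word, "NOUN") for word in sentence] for sentence in sentences]

def parse(tagged_sentences):
    # Stage 3: diagram grammatical structure (dummy parser).
    return [{"tokens": sentence} for sentence in tagged_sentences]

def analyze(text):
    # Later stages see only their predecessor's output, so a segmentation
    # mistake made in segment() can never be corrected by the tagger or
    # the parser downstream.
    return parse(tag(segment(text)))

print(analyze("Time flies like an arrow. Fruit flies like a banana."))
```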
In a multistage process, however, errors compound from stage to stage. “Even if they’re really good stages, they’re 95 percent,” Radul says. “Ninety-five percent is considered extraordinary.” If each stage is 95 percent accurate, a five-stage process is 77 percent accurate; a 20-stage process — by no means unheard-of in AI research — is only 36 percent accurate.
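A back-of-the-envelope calculation, assuming (as the article's figures implicitly do) that each stage's errors are independent, reproduces those numbers: the pipeline's overall accuracy is just the per-stage accuracy raised to the number of stages.

```python
# Overall accuracy of an n-stage pipeline in which every stage is
# independently 95 percent accurate: 0.95 ** n.
for n in (5, 20):
    print(f"{n:2d} stages: {0.95 ** n:.0%} accurate")
# Output:
#  5 stages: 77% accurate
# 20 stages: 36% accurate
```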
Systems that feed information from later stages back to earlier ones can correct such compounding errors, but they're enormously complicated, and building them from scratch is prohibitively time-consuming for most researchers. A few such single-purpose systems