2021 in Review
By Bill Andrews
December 23, 2021
Mathematicians and computer scientists answered major questions in topology, set theory and even physics, while computers continued to grow more capable.
Mathematicians and computer scientists had an exciting year of breakthroughs in set theory, topology and artificial intelligence, in addition to preserving fading knowledge and revisiting old questions. They made new progress on fundamental questions in the field, celebrated connections spanning distant areas of mathematics, and saw the links between mathematics and other disciplines grow. But many results were only partial answers, and some promising avenues of exploration turned out to be dead ends, leaving work for future (and current) generations.
Topologists, who had already had a busy year, saw the release of a book this fall that finally presents, comprehensively, a major 40-year-old work that was in danger of being lost. A geometric tool created 11 years ago gained new life in a different mathematical context, bridging disparate areas of research. And new work in set theory brought mathematicians closer to understanding the nature of infinity and how many real numbers there really are. This was just one of many decades-old questions in math that received answers — of some sort — this year.
But math doesn’t exist in a vacuum. This summer, Quanta covered the growing need for a mathematical understanding of quantum field theory, one of the most successful concepts in physics. Similarly, computers are becoming increasingly indispensable tools for mathematicians, who use them not just to carry out calculations but to solve otherwise impossible problems and even verify complicated proofs. And as machines become better at solving problems, this year has also seen new progress in understanding just how they got so good at it.
Preserving Topology
It’s tempting to think that a mathematical proof, once discovered, would stick around forever. But a seminal topology result from 1981 was in danger of being lost to obscurity, as the few remaining mathematicians who understood it grew older and left the field. Michael Freedman’s proof of the four-dimensional Poincaré conjecture showed that certain shapes that are similar in some ways (or “homotopy equivalent”) to a four-dimensional sphere must also be similar to it in other ways, making them “homeomorphic.” (Topologists have their own ways of determining when two shapes are the same or similar.) Fortunately, a new book called The Disc Embedding Theorem lays out, across nearly 500 pages, the inescapable logic of Freedman’s surprising approach and firmly establishes the finding in the mathematical canon.
Another recent major result in topology involved the Smale conjecture, which asks whether the four-dimensional sphere’s basic symmetries are essentially all the symmetries it has. Tadayuki Watanabe proved that the answer is no: more kinds of symmetries exist. In doing so he kicked off a search for them, with new results appearing as recently as September. Also, two mathematicians developed “Floer Morava K-theory,” a framework that combines symplectic geometry and topology; the work establishes a new set of tools for approaching problems in those fields and, almost in passing, settles a new version of a decades-old problem called the Arnold conjecture. Quanta also explored the origins of topology itself with a column in January and an explainer devoted to the related subject of homology.
Opening AI’s Black Box
Whether they’re helping mathematicians do math or aiding in the analysis of scientific data, deep neural networks, a form of artificial intelligence built upon layers of artificial neurons, have become increasingly sophisticated and powerful. They also remain mysterious: Traditional machine learning theory says their huge numbers of parameters should result in overfitting and an inability to generalize, but clearly something else must be happening. It turns out that older and better-understood machine learning models, called kernel machines, are mathematically equivalent to idealized versions of these neural networks, suggesting new ways to understand — and take advantage of — the digital black boxes.
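For readers who haven’t met a kernel machine before, the sketch below shows one of the simplest members of the family, kernel ridge regression with a radial basis function kernel, fit to a toy one-dimensional dataset. It is only an illustrative example of what a kernel machine is, not the specific models or the neural-network correspondence from the research described above, and the bandwidth and regularization values are arbitrary choices.

```python
import numpy as np

# Kernel ridge regression with an RBF (Gaussian) kernel: a simple,
# classical "kernel machine." Toy illustration only.

def rbf_kernel(A, B, bandwidth=1.0):
    # Pairwise squared distances between the rows of A and the rows of B.
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def fit_kernel_ridge(X, y, reg=1e-3, bandwidth=1.0):
    # Solve (K + reg*I) alpha = y for the dual coefficients alpha.
    K = rbf_kernel(X, X, bandwidth)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, bandwidth=1.0):
    # A prediction is a weighted sum of kernel similarities to the training points.
    return rbf_kernel(X_test, X_train, bandwidth) @ alpha

# Toy data: noisy samples of y = sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

alpha = fit_kernel_ridge(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X, alpha, X_test))  # should roughly track sin at the test points
```

The point worth noticing is that the model’s behavior is determined entirely by pairwise similarities between data points, which is part of why such models are so much easier to analyze mathematically than deep networks.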
But there have been setbacks, too. Related kinds of AI known as convolutional neural networks have a very hard time distinguishing between similar and different objects, and there’s a good chance they always will. Likewise, recent work has shown that gradient descent, an algorithm useful for training neural networks and performing other computational tasks, is a fundamentally difficult computational problem, meaning there are inherent limits on what it can be expected to compute efficiently.
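For context on what that algorithm actually does, here is a bare-bones gradient descent loop minimizing a simple one-dimensional function. The function, starting point and step size are arbitrary illustrative choices, and the hardness result described above concerns far more general settings than this toy.

```python
# A bare-bones gradient descent loop. Illustrative sketch only; the choices
# of function, starting point and step size below are arbitrary.

def gradient_descent(grad, x0, step_size=0.1, num_steps=100):
    x = x0
    for _ in range(num_steps):
        x -= step_size * grad(x)  # take a small step against the gradient
    return x

# Minimize f(x) = (x - 2)^2 + 1, whose gradient is 2 * (x - 2).
minimizer = gradient_descent(lambda x: 2 * (x - 2), x0=10.0)
print(minimizer)  # converges toward x = 2, the true minimum
```

Each step simply nudges the current guess a little way downhill; the surprise of the recent work is that, in general settings, even finding a point where this simple process stops making progress can be an intrinsically hard computational problem.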