A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers. As we humans strive to understand the fundamental nature of the world, our machines churn out measurable, practical predictions that seem to extend beyond the limits of thought. While understanding might satisfy our curiosity, with its narratives about cause and effect, prediction satisfies our desires, mapping these mechanisms on to reality. We now face a choice about which kind of knowledge matters more – as well as the question of whether one stands in the way of scientific progress.
Until recently, understanding and prediction were allies against ignorance. Francis Bacon was among the first to bring them together in the early days of the scientific revolution, when he argued that scientists should be out and about in the world, tinkering with their instruments. This approach, he said, would avoid the painful stasis and circularity that characterised scholastic attempts to get to grips with reality. In his Novum Organum (1620), he wrote:
Our new method of discovering the sciences is such as to leave little to the acuteness and strength of wit, and indeed rather to level wit and intellect. For as in the drawing of a straight line, or accurate circle by the hand, much depends on its steadiness and practice, but if a ruler or compass be employed there is little occasion for either; so it is with our method.
Bacon proposed – perfectly reasonably – that human perception and reason should be augmented by tools, and by these means would escape the labyrinth of reflection.
Isaac Newton enthusiastically adopted Bacon’s empirical philosophy. He spent a career developing tools: physical lenses and telescopes, as well as mental aids and mathematical descriptions (known as formalisms), all of which accelerated the pace of scientific discovery. But hidden away in this growing dependence on instruments were the seeds of a disconcerting divergence: between what the human mind could discern about the world’s underlying mechanisms, and what our tools were capable of measuring and modelling.
Today, this gap threatens to blow the whole scientific project wide open. We appear to have reached a limit at which understanding and prediction – mechanisms and models – are falling out of alignment. In Bacon and Newton’s era, accounts of the world that were tractable to the human mind, and predictions that could be tested, were joined in a virtuous circle. Compelling theories, backed by real-world observations, have advanced humanity’s understanding of everything from celestial mechanics to electromagnetism and Mendelian genetics. Scientists have grown accustomed to intuitive understandings expressed in terms of dynamical rules and laws – such as Charles Darwin’s theory of natural selection, or Gregor Mendel’s principle of independent assortment, which describes how an organism’s genome is propagated via the separation and recombination of its parents’ chromosomes.
But in an age of ‘big data’, the link between understanding and prediction no longer holds true. Modern science has made startling progress in explaining the low-hanging fruit of atoms, light and forces. We are now trying to come to terms with a more complex world – from cells to tissues, brains to cognitive biases, markets to climates. Novel algorithms allow us to forecast some features of the behaviour of these adaptive systems that learn and evolve, while instruments gather unprecedented amounts of information about them. And while these statistical models and predictions often get things right, it’s nearly impossible for us to reconstruct how they did it. Instrumental intelligence, typically machine intelligence, is not only resistant to reason but sometimes actively hostile to it. Studies of genomic data, for example, can capture hundreds of parameters – patient, cell-type, condition, gene, gene location and more – and link the origin of diseases to thousands of potentially important factors. But these ‘high-dimensional’ data-sets and the predictions they provide defy our best ability to interpret them.
If we could predict human behaviour with Newtonian and quantum models, we would. But we can’t. It’s this honest confrontation between science and complex reality that produces the schism. Some critics claim that it’s our own stubborn anthropocentrism – an insistence that our tools yield to our intelligence – that’s impeding the advancement of science. If only we’d quit worrying about placating human minds, they say, we could use machines to accelerate our mastery over matter. A computer simulation of intelligence need not reflect the structure of the nervous system, any more than a telescope reflects the anatomy of an eye. Indeed, the radio telescope provides a compelling example of how a radically novel and non-optical mechanism can exceed a purely optical function, with radio telescopes able to detect galaxies that the Milky Way hides from optical view.
The great divergence between understanding and prediction echoes Baruch Spinoza’s insight about history: ‘Schisms do not originate in a love of truth … but rather in an inordinate desire for supremacy.’ The battle ahead is whether brains or algorithms will be sovereign in the kingdom of science.
Paradoxes and their perceptual cousins, illusions, offer two intriguing examples of the tangled relationship between prediction and understanding. Both describe situations where we thought we understood something, only to be confronted with anomalies. Understanding is less well-understood than it seems.
Some of the best-known visual illusions ‘flip’ between two different interpretations of the same object – such as the face-vase, the duck-rabbit and the Necker cube (a wireframe cube that’s perceived in one of two orientations, with either face nearest to the viewer). We know that objects in real life don’t really flip back and forth like this, and yet that’s what our senses are telling us. Ludwig Wittgenstein, who was obsessed with the duck-rabbit illusion, suggested that we see an object only after we have interpreted it, rather than interpreting it only after it has been seen. What we see is what we expect to see.
The cognitive scientist Richard Gregory, in his wonderful book Seeing Through Illusions