“Humanity has entered a new era.” That is the bold claim with which quant investor Igor Tulchinsky and Cornell professor of genomics Christopher E Mason begin The Age of Prediction, their ambitious new survey of how predictive algorithms are changing the world.
Its publication could hardly be better timed. The public first became aware that something big was happening in the world of AI back in 2016, when Google DeepMind’s AlphaGo program defeated the reigning world champion at Go. Four years later, the company’s AlphaFold program solved one of the biggest puzzles in modern biology: the challenge of predicting the molecular structures into which proteins fold, based only on the sequences of their constituent amino acids.
Then late last year came the public release of OpenAI’s ChatGPT — the first of a flurry of Large Language Models that exhibit freakish abilities to sustain humanlike conversations, and have left the famous Turing test for dust. Even the experts can’t agree whether we should be tantalised or terrified by the machine learning revolution evidently under way.
But what exactly does it consist of? The Age of Prediction avoids getting bogged down in the debate over what actually counts as AI. Instead, it goes directly to the source and identifies the three big changes which in practice lie behind the startling achievements of the past decade.
The first of these is the tsunami of data that the digitisation of human life has generated. The second is the development of new statistical techniques able to discover patterns in this Big Data more effectively than ever before.
The third — and key to the timing of the revolution itself — has been the collapse in the cost of the raw computing power that enables these techniques to be applied to those data, driven by new hardware such as Nvidia’s V100 chip, launched in 2017.
It is the convergence of these three developments that has transformed the predictive powers of statistical models.
As The Age of Prediction shows, the effects of that transformation are now all around us. The headline-grabbing examples already mentioned merely scratch the surface. With less fanfare but potentially much greater consequences, Big Data and automated algorithmic prediction are also turning fields as far apart as insurance, the arms industry and political campaigning on their heads.
A large proportion of the Big Data in question is genetic. Now that it is possible to sequence the 3bn base-pairs of a human genome for less than $200 in under eight hours using a machine no larger than a suitcase, genomics “can now be deployed at every crime scene, in every bedroom, toilet or turnstile”.
The algorithms developed by start-ups such as Craig Venter’s Human Longevity, meanwhile, claim to be able to predict an individual’s voice, height and facial features from trace DNA alone. That is before supplementary non-human genetic information — such as microbes, which offer further giveaways as to where a person has been and when — is drawn in to triangulate identification more precisely still.
The dystopian fantasy portrayed in the cult 1997 film Gattaca, in which individuals’ genomes are used to predict their futures and regulate their roles in society, is fast becoming a real possibility.
Contemplating this future leads the authors to confront a variety of problems and paradoxes. Can the new predictive modelling methods cope with the intrinsically adaptive nature of human behaviour, for example?
AlphaFold’s extraction of the formula that governs protein-folding, though a feat of astounding complexity, was in one sense simple, in that it could be certain that the amino acids would not react to its discovery by altering the way they combine.
Attempts to predict human behaviour, by contrast, have historically been plagued by moral hazard: humans’ ability to discern what predictions are being made and deliberately scheme to frustrate them.
Moreover, given the importance of Big Data to today’s predictive technologies, the authors identify a new risk — “privacy hazard”, you could call it — whereby the more invasive the algorithms are, the more individuals conceal their data in order to starve the programs of the raw material they need to function.
Yet the greatest paradox of all, the authors conclude, will arise if these obstacles are overcome. If everything becomes predictable, there will be no uncertainty and no such thing as free will. Then, as Tulchinsky and Mason ask, “How will we know that our predictive capacity is truly predictive or simply the elimination of all other paths to the future?”
It’s a new era, all right, but the fundamental questions are the oldest ones in the book.
The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk by Igor Tulchinsky and Christopher E Mason, MIT Press £26/$27.95, 232 pages
Felix Martin is the author of ‘Money: The Unauthorised Biography’