The writer is head of biosecurity policy at the Centre for Long-Term Resilience, a UK think-tank
Researchers at MIT recently conducted an experiment: they asked undergraduate students to test whether AI chatbots could be prompted to assist non-experts in causing a pandemic. Within an hour, the chatbots had suggested four potential pandemic pathogens, explained how they could be created from synthetic DNA using reverse genetics, and supplied the names of DNA synthesis companies judged unlikely to screen orders.
Developing bioweapons is not that easy, and chatbot instructions currently only go so far, but the experiment shows what can happen when AI technology barrels through scientific knowledge. The troubling fact is that large language models and new biological design tools are dramatically lowering the barriers to engineering the next pandemic. The former Google CEO Eric Schmidt describes AI-designed bioweapons as “a very near-term concern”.
Governments are finally waking up to the scale of emerging biological risks. Last month the UK published its Biological Security Strategy and committed £1.5bn in annual funding to counter the threat. Meanwhile, the Pentagon is concluding its first Biodefense Posture Review, assessing how prepared the US is to deal with bioweapons and future pandemics. Globally, 194 countries are currently negotiating a pandemic treaty, which will strengthen international resilience to biological events.
But more is required, and the UK now needs to move quickly on three fronts. The first undertaking must be to develop evaluations of large language models and biological design tools to determine their capabilities and gauge the risks they pose. These should be conducted by the Foundation Model Taskforce, the new body responsible for safe AI development in the UK. But such evaluations need biosecurity expertise, which the newly announced UK Biosecurity Leadership Council, convening academic and industry leaders with government, is well placed to provide. Ultimately, we need to create a series of chokepoints to limit access to dangerous tools. These could include removing information about harmful biology from the training data for AI systems, inserting stringent content controls, and preventing the distribution of software used to design deadly biological agents.
Second, the UK needs to advance efforts to detect new pathogens rapidly in the event of a release. As the world leader in metagenomic sequencing, which offers the possibility of detecting previously unknown pathogens at the very beginning of outbreaks, the government could do much more both at home and abroad. The development of a National Biosurveillance Network will help provide an improved early warning system within Britain. But to achieve its full potential, this must be linked to a global system that can sound the alarm on potential pandemics. This is where the power of AI can be harnessed to aid early detection, particularly if such technology is distributed to countries with less well-resourced healthcare systems.
Finally, we need to bring nations together to focus on the converging risks of AI and biotechnology. The UK will host the world's first summit on AI safety this autumn; as the most tangible near-term extreme risk presented by the technology, biosecurity needs to be on the agenda. Progress requires global coverage and should be linked to multilateral efforts, including the pandemic treaty.
As biosecurity specialists well know, recent technological developments have made our future prospects significantly darker. But at the same time, advances such as mass genetic sequencing and rapidly deployable mRNA vaccines could render bioweapons obsolete and end pandemics for good.
We have a narrow window during which to take targeted and effective action, and bring the world along. To act before the risks are upon us, our policymaking and statecraft will need to be just as good as our science.