The UK government will host a summit on the safety of artificial intelligence at the start of November, with “like-minded” countries invited to the event in Bletchley Park to address global threats to democracy, including the use of AI in warfare and cyber security.
Leading academics and executives from AI companies, including Google’s DeepMind, Microsoft, OpenAI and Anthropic, will be asked to the AI Safety Summit at the Buckinghamshire site where British codebreakers were based during the second world war.
“The UK will host the first major global summit on AI safety this autumn,” a spokesperson for the government said on Wednesday, adding that Downing Street would set out further details in due course.
Prime minister Rishi Sunak first announced in June that the UK would organise a summit on AI regulation, following a meeting in Washington with President Joe Biden.
There has been anticipation and tension over the event’s details as the two nations have tried to find a date that would work for both leaders and avoid clashes with other events, people familiar with the preparations said.
The location, first reported by Bloomberg, was chosen in part for its historical significance as the home of some of the world's earliest computers, which were used to decode encrypted messages during the second world war.
Bletchley Park is almost equidistant from Cambridge and Oxford, the two university cities that are integral to the government’s plans to bolster the UK’s tech capabilities.
The meeting will bring together “key countries, as well as leading technology companies and researchers, to drive targeted, rapid international action and to build on developing the regulatory guardrails we need for the safe and responsible development of AI”, the UK government spokesperson said on Wednesday.
World leaders with similar democratic values were expected to attend, the people familiar with the preparations said, adding that invitations had yet to be sent out.
The UK is keen to invite China, a leading AI power, but is concerned that attendees will not be able to find common ground on regulation, two people familiar with the thinking said. It is therefore considering a different forum for that discussion, they added.
The need for regulation of AI, and global co-ordination of such rules, has been hotly discussed as the development of products that feature generative AI has ramped up over the past nine months.
AI chatbots, such as OpenAI’s ChatGPT and Google’s Bard, are built on a form of the technology known as large language models, which can process and generate vast amounts of data; image generators rely on related techniques.
The emergence of these products has led governments and tech companies to voice concern about the future dangers of the technology, including its risks to the workforce by displacing jobs through automation and to democracy by creating and spreading misinformation.
The summit will address safety in artificial intelligence broadly, rather than focusing solely on generative AI. The programme will include sessions on the ethics of using AI systems and the guardrails to build around them, potential solutions to misinformation ahead of upcoming elections, and the need for cyber security, such as designing secure AI software that can resist hackers and nefarious nation-state actors, the people said.
The use of AI in warfare, for instance the use of autonomous weapons, and the availability of the semiconductors widely used in AI products, are also on the draft agenda, they added.
Dame Wendy Hall, regius professor of computer science at the University of Southampton and co-chair of the UK government’s 2017 AI review, said the summit needed input from wider society, not just tech companies and governments.
“My worry about the summit is that the advice is coming mainly from the big tech companies themselves,” she said. “Is it right for the people making money out of this to be the people designing the regulation vehicle? You need voices other than the tech companies themselves in this,” she added.
How to make different nations’ regulations interoperable while providing the same safeguards will be at the core of discussions at the summit, which is taking place alongside the G7’s Hiroshima AI framework, a commitment hammered out in May between member states to co-ordinate global regulation.
The UK has yet to finalise legislation to regulate AI. A white paper announced in May set out the government’s aims, under which existing regulators would police the application of the technology and be responsible for breaches of the law.
The EU’s Artificial Intelligence Act is in its final stages but has been criticised by tech companies as too strict, with amendments that would seek to regulate so-called foundation models, the underlying technology behind generative AI, before they are applied to products.
Additional reporting by George Parker in London