The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission
Next year is being labelled the “Year of Democracy”: a series of key elections is scheduled to take place, including in jurisdictions with significant power and populations, such as the US, EU, India, Indonesia and Mexico. In many of them, democracy is under threat or in decline. It is certain that our volatile world will look different after 2024. The question is how — and why.
Artificial intelligence is one of the wild cards that may well play a decisive role in the upcoming elections. The technology already features in varied ways in the electoral process — yet many of these products have barely been tested before their release into society.
Generative AI, which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams. A controversial video showing a crumbling world should Joe Biden be re-elected was not created by a foreign intelligence service seeking to manipulate US elections, but by the Republican National Committee.
Foreign intelligence services are also using generative AI to boost their influence operations. My colleague at Stanford, Alex Stamos, warns: “What once took a team of 20 to 40 people working out of [Russia or Iran] to produce 100,000 pieces can now be done by one person using open-source gen AI”.
AI also makes it easier to target messages so they reach specific audiences. This individualised experience will increase the complexity of investigating whether internet users and voters are being fed disinformation.
While much of generative AI’s impact on elections is still being studied, what is known does not reassure. We know people find it hard to distinguish between synthetic media and authentic voices, making it easy to deceive them. We also know that AI repeats and entrenches bias against minorities. And we know that AI companies’ pursuit of profit does not naturally align with the promotion of democratic values.
Many members of the teams that social media companies hired, particularly since 2016, to deal with foreign manipulation and disinformation have been laid off. YouTube has explicitly said it will no longer remove “content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections”. It is, of course, highly likely that lies about past elections will play a role in 2024 campaigns.
Similarly, after Elon Musk took over X, formerly known as Twitter, he gutted trust and safety teams. Right when defence barriers are needed the most, they are being taken down.
At US universities, experts doing independent research into online disinformation have been subject to political attack. These efforts to undermine important work are deeply troubling. And in India, the largest country to go to the polls in 2024, civil society groups and journalists seeking to investigate electoral practices are also under growing pressure.
There are steps we can take to prevent this new technology from causing unpleasant surprises in 2024. Independent audits for bias and research into disinformation efforts must be supported. AI companies should offer researchers access to information that is currently hidden, such as content moderation decisions. International teams should study the elections taking place this year, such as those in the Netherlands, Poland and Egypt, to understand how AI plays a role.
When it comes to AI and elections, I believe we cannot be careful enough. Democracies are precious experiments, with a growing set of enemies. Let us hope that 2024 will indeed be the “Year of Democracy” — and not the year that marks its decisive decline.