No wave of deepfakes expected in elections, but the risk remains

Image: buildings of the European Parliament (left) and the US Congress (right) | NOS

NOS News, today, 5:26 PM

  • Nando Kasteleijn, Tech editor

At the beginning of this year, residents of the American state of New Hampshire received a call that appeared to come from President Biden: the speaker advised them not to vote in the primary election. Only the real election in November counted.

Only it wasn’t actually the president making the call, but an AI-generated voice delivering the message: an example of a deepfake. The calls came from a company in Texas, which is now under investigation for illegal activities.

Disinformation has been spread around elections for years, with Russia frequently mentioned in the past. 2024 is a special election year: more than half of the world’s population will go to the polls. The elections are already underway in India, Europe follows early next month, and the US votes this fall.

So the stakes are high, while creating deepfakes is becoming ever easier thanks to the rise of generative AI (artificial intelligence).

One deepfake is enough

“One AI-generated audio file, one or two days before the election, is enough to have an impact on the vote,” said Tommaso Canetta, who coordinates fact-checking at the European Digital Media Observatory. It already happened last year in Slovakia, where fake audio was used to put the leader of the Liberal Party in a bad light.

According to Canetta, audio is currently the most problematic variant. With images created with AI, you often see (small) deviations in the image. That was clearly visible in an AI-generated image of Frans Timmermans (https://twitter.com/VanTongeren8/status/1723641850857246886) that circulated on X last fall. The photo was clearly fake.

AI-made videos are not yet good enough to be indistinguishable from the real thing, although Sora, OpenAI’s text-to-video model, could change that. For now, it is often videos in which the audio is fake and the lips are synced to match it.

“Audio deepfakes are the most harmful because the average user can’t easily recognize them, especially if they don’t pay close attention to conversational style and grammar,” Canetta says. He emphasizes that there are good ways to recognize these types of deepfakes, but they do not provide a 100 percent guarantee.

Audio: listen to a deepfake of Joe Biden here. You first hear Biden’s real voice and then the fake one.

Canetta’s organization produces monthly reports on the number of fact checks carried out by European fact-checking organizations, and also tracks how many of the checked items were made with AI. In March, 87 of the 1,729 fact-checked items were created with AI, about 5 percent.

According to Canetta, a large number is not even necessary: a single deepfake can have an effect on voters. Tom Dobber, a researcher at the University of Amsterdam, drew the same conclusion together with colleagues after an experiment. They had a panel watch a deepfake video of American Democratic politician Nancy Pelosi, in which she justified the storming of the Capitol.

Democrats were more negative about Pelosi afterward. At the same time, Dobber says that making a direct connection between such an incident and an election result is very difficult.

Minor role

Luc van Bakel, fact-check coordinator at the Flemish broadcaster VRT, expects a limited role for deepfakes in the European elections in Belgium and the Netherlands. “It’s one more thing that gets added: a new method on top of the existing ones.”

Ultimately, disinformation gains momentum when it is spread widely, often through social media such as TikTok, Facebook, Instagram and X. “X is characterized by a large amount of disinformation,” says Canetta. “But I think the other platforms still have work to do as well.”

In response, TikTok and YouTube said they remove videos that are misleading. TikTok also emphasizes that it is not always possible to reliably recognize material that has been manipulated with AI. Meta (Facebook’s parent company) and X did not respond to questions from NOS.

Van Bakel of VRT also points to an undercurrent that is not publicly visible: private conversations in apps such as WhatsApp. He thinks video circulates mostly on public social media, while audio circulates more in places where deepfakes are less likely to be noticed.
