Deepfakes Aren't the Disinformation Threat They're Made Out to Be

Commentary

Dec 19, 2023

Photo by Tero Vesalainen/Getty Images

This commentary originally appeared on Inside Sources on December 18, 2023.

The chief of Britain's domestic intelligence agency MI5 warns of danger on the horizon. Deepfakes, Ken McCallum said at a conference in California, are “a threat to democracy” with the potential to “cause all kinds of confusion and dissension and chaos in our societies.”

It's a warning often repeated—yet the threat itself always seems just around the corner.

Deepfakes—imitative audio-visual content produced via deep learning techniques—have been on the radar of media watch organizations for years, and there's no doubt they can be convincing. In 2017, a researcher from the University of Washington shared a synthetic video of a foul-mouthed Barack Obama, prompting tech experts to warn of an impending crisis if generative AI were left unchecked.

Since then, predictions about deepfakes' potential have come and gone. The 2020 U.S. presidential election was fractious, polarizing, and flooded with fakery—but ultimately uninfluenced by deepfakes. Meanwhile, the technology underpinning deepfakes has developed rapidly, and generative AI has become far more publicly accessible, with advanced image generators available to anybody with an email address.

The technology seemed ripe to disrupt the Israel-Hamas conflict. Yet convincing deepfakes have been curiously absent. Disinformation has been rife, but it has come in memes, stories, and videos taken out of context. One influential video, for example, took footage of North Korea's Kim Jong-un speaking in 2020 and added false captions to make Kim appear to blame Joe Biden for the war. Others were manually edited, such as a swiftly refuted video styled as a BBC News report claiming that Ukraine was sending weapons to Hamas.


The reason for this is partly the technology. Convincing celebrity deepfakes can give the impression that truly lifelike deepfakes are easy to make. Yet, researchers at RAND have shown that viral deepfake videos require huge amounts of resources—one popular video of “deepfake Tom Cruise” took months of training, expensive graphics processing units, and a skilled actor to mimic Cruise's mannerisms. With all this effort and cost, it's often much more straightforward for those spreading disinformation to use more conventional forms of media.

Even as generative AI grows more advanced, humans remain remarkably skilled at spotting fakes. In a recent study at the Massachusetts Institute of Technology's Media Lab, a leading deepfake detection model judged a convincing deepfake of Vladimir Putin to have just an 8 percent chance of being artificial; human participants put that chance at 70 percent. As the researchers explain, this is partly because participants draw on contextual information unavailable to the model: they judge whether Putin would really act and speak as he does in the video, in a way a neural network cannot.

Those setting out to deceive often have more luck with lower-tech tools. Moreover, focusing only on the potential of deepfakes to fool people, as most discussions of deepfakes do, assumes that people engage with disinformation on a purely factual level.

But people often do not. Donald Trump's claim that Barack Obama was not born in the United States and the Brexit campaign's promise that the United Kingdom could save 350 million pounds a week by leaving the European Union were, on the surface, fact-based claims. Yet they cut through not because of the weight of evidence behind them but because of the deeper narratives and belief systems they represented.

This is backed up by research at the Centre for Research on Extremism and Security Threats, where experts found that media literacy had surprisingly little effect on whether people shared disinformation. Many people engage with deepfakes knowing they are fake.

We can see this in the few deepfakes that have surfaced in the sea of disinformation surrounding Israel-Hamas. One popular deepfake featured thousands of football fans in Madrid, all waving a giant Palestinian flag. The quality was poor, the colors oversaturated, resembling a video game. If the purpose of deepfakes is to mislead, it makes little sense to engage with something so clearly false. Yet millions viewed the image because they found its underlying meaning heartening.

Deepfakes have made their way into state messaging campaigns. Russia Today, the state-controlled news network, published a deepfake video in June of various world leaders fretting over how to sanction Russia; Joe Biden snoozes on a desk, and Emmanuel Macron bangs his head against a cabinet. It is clearly not meant to deceive—“this media was generated using neural networks for the purposes of parody” runs underneath the opening scene, and the video is shot in a cinematic style. Few viewers will likely assume that Russia Today obtained over-the-shoulder footage of Olaf Scholz, the German chancellor, using ChatGPT to search for sanction ideas.

But the insistence that deepfakes are a tool of deception limits our ability to understand such cases. “In a world where deepfake videos seamlessly blur the demarcation between reality and fiction,” said one journalist, “Moscow's latest propaganda foray wielded a metaphorical sledgehammer, obliterating any semblance of distinction.”

Such knee-jerk responses miss the point. The influence of deepfakes is not necessarily to make people believe something that is not true but to create engaging content as part of a broader narrative—just like many other forms of media.

To be clear, some deepfakes are plainly meant to deceive. An election in Slovakia was recently disrupted by a generated audio clip that appeared to capture leading politicians rigging the vote. It was swiftly called out as fake, but the case does highlight how deepfakes could be used to disrupt: it raises the question of whether audio is more persuasive than visual content, and it shows how fakes can be deployed at opportune moments, before they can be refuted.

Even so, the effect of the deepfake was minor. The politicians whose voices were cloned stressed that the fake did not alter the course of the election. Far more significant was the avalanche of conventional disinformation spread by Russian trolling operations and, more prominently still, by local politicians. One media watch organization flagged 345,000 election-related disinformation posts.


Deepfakes are a problem. But they are a drop in the ocean.

So, catastrophizing deepfakes misunderstands the message-driven way they are often used and misrepresents how disinformation works. It is an intelligence chief's job to worry about technologies such as deepfakes, and rightly so; their effect on disinformation may yet grow. But concerns over generative AI too often stray into alarmism about hypothetical dystopias where fact is indistinguishable from fiction. That has distorted the picture. To really understand deepfakes, it is better to look at how they are being used today.


Peter Carlyon is a defense and security analyst at RAND Europe.