Misinformation might sway elections — but not in the way that you think
Research suggests it’s hard to change people’s political opinions, but easier to nudge their behaviour
Rampant deepfakes and false news are often blamed for swaying votes
In January, some US voters in New Hampshire received automated phone messages in which President Joe Biden’s voice urged them not to vote in the state’s Democratic Party primary election. It wasn’t actually Biden, however: the message had been generated by artificial intelligence (AI).
The rise of such AI fakery has made more people than ever concerned about the effect of misinformation on elections, particularly because 2024 is a standout year for democracy. More than 50 countries will have held elections by the end of December, including high-profile polls in the United States, United Kingdom, India, South Africa and Mexico, totalling more than 2 billion potential voters. The broad umbrella of misinformation — encompassing all false information, whether spread unknowingly or with intent to deceive — could have devastating impacts.
The World Economic Forum has ranked misinformation as its top global risk over the next two years, above extreme weather events and even war. Policymakers around the world have drafted and passed laws and measures in an attempt to combat the growing threat.
Although the problem is undoubtedly real, the true impact of misinformation in elections is less clear. Some researchers say the claimed risks to democracy posed by misinformation are overblown. “I think there’s a lot of moral panic, if you will, about misinformation,” says Erik Nisbet, a communications and policy researcher at Northwestern University in Evanston, Illinois. A body of research suggests that it is notoriously difficult to persuade people to change their vote, for example. It’s also far from clear how any one message — true or false — can penetrate amid the media chaos.
Still, as others point out, misinformation does not have to change minds about politics to have an impact. It can, for example, mislead people about when and where to vote, or even whether they should do so at all. Furthermore, just knowing that misinformation is out there — and believing it is influential — is enough for many people to lose faith and trust in robust systems, from science and health care to fair elections.
And even if misinformation affects only small numbers of people, if it drives them to action, then that too can have an amplified impact. “We might not expect widespread effects across the whole population, but it might have some radicalizing effects on tiny groups of people who can do a lot of harm,” says Gregory Eady, a political scientist at the University of Copenhagen, who studies the effects of social media.
Steadfast opinions
The World Economic Forum justified ranking misinformation as the world’s most severe short-term risk on the grounds that, according to its Global Risks Report 2024, it could “radically disrupt electoral processes in several economies” and “trigger civil unrest and possibly confrontation”.
Historians can point to many examples of both. In ancient Rome, Octavian (adopted son and heir of the murdered Julius Caesar) launched a smear campaign that falsely portrayed his rival Marc Antony as a traitor, as part of a successful bid to become the first emperor of Rome. More recently, misinformation has been blamed for a swathe of social and political trends — from people’s reluctance to get vaccinated against COVID-19 and rising discrimination against migrants, to the Brexit vote for the United Kingdom to leave the European Union and scepticism about the seriousness of climate change.
The problem, researchers say, is that it’s very hard to prove cause and effect: to determine that any given piece of misinformation made a material difference to how people behaved. “It’s generally a very difficult kind of question, to get at what the effects of misinformation are in the real world,” says Eady.
Difficult, but not impossible. Last year, Eady and his colleagues published the results of one such empirical analysis [1], which considered a high-profile and controversial question: to what extent did misinformation spread on the social-media platform Twitter (as it was then known) by Russian sources influence the 2016 US election?
There’s little doubt that Russian social-media accounts impersonated US users in a way that was intended to polarize the US electorate and build support for the Republican presidential candidate, Donald Trump. Eady and his colleagues showed that those trolls reached potentially millions of people. But the analysis also showed that the bulk of the misinformation was probably seen by only a small proportion of them — and by people who already self-identified as Republican.
Although it might seem that the world is drowning in misinformation, it’s only a drop in the ocean compared with the tsunami of other news that people see and hear every day, Eady says. The Twitter users were exposed to hundreds of times more posts from domestic news sources and politicians, Eady found, especially as the election drew nearer.
Sacha Altay, an experimental psychologist at the University of Zurich, Switzerland, says people tend to decide who to vote for on the basis of gut instinct, values and beliefs, rather than on information — whether that information is true or not. Particularly in the US two-party system, people tend to identify strongly with the values of one party or the other, says Altay. “We should start from the premise that it’s very unlikely that any type of information will change people’s decision,” he says.
Swayed behaviour
A more effective form of political persuasion, researchers and strategists say, is to focus not on getting people to change their minds, but rather on getting them to act, or not, on their existing beliefs.
Misinformation about politics and public health can have this kind of effect, says Kate Starbird, a computer scientist who co-founded the University of Washington’s Center for an Informed Public in Seattle. This is especially the case if the misinformation is picked up and amplified by public figures, even if they are prominent only in small communities. “Those communities can have impact on politics at scale,” she says.
For example, misinformation about the validity of the 2020 US presidential election was amplified and spread by a subset of Trump supporters, helping to trigger the attack on the US Capitol building on 6 January 2021. One recent study [2] concluded that, in a sample of nearly 665,000 US registered voters on X (formerly Twitter), just over 2,100 people (roughly 0.3% of the sample) accounted for 80% of the fake news shared about this election. Starbird adds that an increasing distrust of measles vaccination in Florida in recent years, which has led to a spike in cases, has been fuelled by small groups taking up that cause and spreading false claims.
The same leverage on people’s behaviour could apply to voting. Although it might be hard to convince people to switch allegiances, it could be easier to persuade them that they don’t need to bother to vote at all, for example. Researchers say that misinformation about the electoral process is on the rise. “We see that in more and more elections,” says Max Grömping, a political scientist who studies election disinformation at Griffith University in Brisbane, Australia. “Basically, messages saying, ‘Oh, you know, the election is postponed, it’s next week, you don’t have to show up.’”
An example was seen last year in the lead-up to an Australian referendum on changing the constitution to establish a formal Indigenous representative body in parliament. At the time, misinformation circulated online stating that the referendum was not compulsory, seemingly to discourage voter turnout. In fact, all voting in Australia is compulsory. In this case, turnout did not seem to be affected; it was slightly higher than in the 2022 national election.
The deepfaked Biden robocalls in January were traced to a political consultant who said he was trying to draw attention to the potential harm that such misinformation could cause. The Federal Communications Commission is now attempting to levy a US$6-million fine against the consultant. The spoof calls, made on 21 January, became high-profile news on 22 January, before the 23 January vote.
Seeing is believing
The Biden case in particular showcases people’s concerns that AI might fuel these kinds of deception, with deepfakes making misinformation seem more realistic. Researcher Hany Farid at the University of California, Berkeley, is keeping track of deepfake cases in the run-up to the US election. Examples on his website include an AI image of Biden in military fatigues, seemingly on the verge of authorizing military strikes, and one of Donald Trump purportedly meeting Black voters.
The recent election in India was plagued by deepfakes, from deceased political figures making speeches to Bollywood actors giving fake endorsements to political parties. And in Slovakia, fake audio of the parliamentary candidate Michal Šimečka talking about buying votes and plans to raise the price of beer was released on the eve of the September 2023 election.
AI is likely to increase the volume of misinformation because it lowers the level of technical skill needed to create credible content, Nisbet says. “But much like the other research on misinformation, it will be difficult to make any actual causal claims unless it’s studied very carefully,” he says.
Some studies have looked at whether the type of media (text versus imagery or video, for example) affects a message’s persuasiveness. Their results have been mixed. Some do suggest that images can be more persuasive: in one study of health messaging about alcohol and cancer, for example, women were more likely to say they would drink less if they saw social-media posts that added images to narrative text [3].
But research by a team at the Massachusetts Institute of Technology (MIT) in Cambridge in 2021 suggests that the persuasive effects of imagery might be minimal. In the study [4], the researchers showed people either video clips or transcripts of political advertisements and COVID-19 messaging. They found that people were more likely to believe that an event really occurred when they saw video rather than just reading about it. But there was little difference in measures of persuasiveness, such as whether their attitudes were changed by the information or whether they were inclined to share it.
“Just because video might be more believable than text doesn’t mean that it is noticeably better at changing people’s minds,” says Adam Berinsky, a co-author of the study and a political scientist at MIT.
Sowing distrust
As fears about misinformation have risen, policymakers have been scrambling to keep up. In recent years, many countries have passed laws and regulations that they claim tackle misinformation, but which have raised concerns about free speech.
As of this January, for example, people in the United Kingdom who release information they know to be false, with the intent to cause “non-trivial” harm, can be punished with fines or several months in jail. Ministers said the move was explicitly intended to clamp down on “dangerous disinformation and election interference online”.
Several bills have been proposed to limit misinformation in the United States. Some would outlaw deceptive uses of AI to portray something that didn’t happen or wasn’t said, whereas others call for better labelling of AI-generated content. Whether such labels actually influence people’s assessments of content, or the take-home messages they draw from altered media, is also unclear [5].
Many researchers warn that, ominously, reports discussing the rising tide of misinformation (perhaps including this one) might have the same effect as the misinformation itself. “A lot of disinformation campaigns are aimed at sowing distrust,” says Altay. “When we tell people that disinformation works and that misinformation is everywhere, we are also sowing doubt and reducing trust in reliable sources.”
Faith in institutions from politics to science and health care has certainly decreased in many countries. Earlier this year, the latest annual survey by global communications firm Edelman found that British people’s trust in the UK government had fallen to 30% — its lowest value since 2012. And in 2023, the Pew Research Center in Washington DC found that 57% of Americans said science has had a mostly positive effect on society. This is a drop of 8 percentage points since November 2021 (when the figure was 65%) and of 16 points since before the start of the COVID-19 outbreak (when it was 73%).
“Misinformation is a serious problem that we need to address,” says Nisbet, “but we don’t want to communicate about it in a way that actually makes things worse.”
doi: https://doi.org/10.1038/d41586-024-01696-z
Additional reporting by Dyani Lewis.
This story originally appeared in Nature. Author: David Adam.