How AI deepfakes threaten the 2024 elections

We don’t yet know the full impact of artificial intelligence-generated deepfake videos on misinforming the electorate. And it may be the narrative around them -- rather than the deepfakes themselves -- that most undermines election integrity.

Last month, a robocall impersonating U.S. President Joe Biden went out to New Hampshire voters, advising them not to vote in the state’s presidential primary election.  The voice, generated by artificial intelligence, sounded quite real.

“Save your vote for the November election,” the voice stated, falsely asserting that a vote in the primary would prevent voters from being able to participate in the November general election.  

The robocall incident reflects a growing concern that generative AI will make it cheaper and easier to spread misinformation and run disinformation campaigns. The Federal Communications Commission last week issued a ruling to make AI-generated voices in robocalls illegal.

Deepfakes have already affected other elections around the globe. In recent elections in Slovakia, for example, AI-generated audio recordings impersonating a liberal candidate circulated on Facebook, in which he appeared to discuss plans to raise alcohol prices and rig the election. During the February 2023 Nigerian elections, an AI-manipulated audio clip falsely implicated a presidential candidate in plans to manipulate ballots. With elections this year in over 50 countries involving half the globe’s population, there are fears deepfakes could seriously undermine their integrity.

Media outlets including the BBC and The New York Times sounded the alarm on deepfakes as far back as 2018. In past elections, however, including the 2022 U.S. midterms, the technology did not produce believable fakes and was not accessible enough, in terms of both affordability and ease of use, to be “weaponized for political disinformation.” Instead, those looking to manipulate media narratives relied on simpler and cheaper ways to spread disinformation: mislabeling or misrepresenting authentic videos, running text-based disinformation campaigns, or just plain old lying on air.

As Henry Ajder, a researcher on AI and synthetic media, writes in a 2022 Atlantic piece, “It’s far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors, than to release an expensive, hard-to-create deepfake, which actually isn’t going to be as good a quality as you had hoped.”

As deepfakes continually improve in sophistication and accessibility, they will contribute more and more to the deluge of informational detritus. They’re already convincing. Last month, The New York Times published an online test inviting readers to look at 10 images and try to identify which were real and which were generated by AI, demonstrating firsthand how hard the two can be to tell apart. The test echoed multiple academic studies, which found that “faces of white people created by AI systems were perceived as more realistic than genuine photographs,” New York Times reporter Stuart A. Thompson explained.

The audio of the fake robocall that targeted New Hampshire voters is difficult to distinguish from Biden’s real voice.

The jury is still out on how generative AI will impact this year’s elections. In a December blog post on GatesNotes, Microsoft co-founder Bill Gates estimates we are still “18-24 months away from significant levels of AI use by the general population” in high-income countries. In a December post on her website “Anchor Change,” Katie Harbath, former head of elections policy at Facebook, predicts that although AI will be used in elections, it will not be “at the scale yet that everyone imagines.” 

Beware the “Liar’s Dividend”

It may, therefore, not be deepfakes themselves but the narrative around them that undermines election integrity. AI and deepfakes will be firmly in the public consciousness as voters go to the polls this year, an awareness supercharged by outsized media coverage of the topic. In her blog post, Harbath adds that it’s “the narrative of what havoc AI could have that will have the bigger impact.”

Those engaging in media manipulation can exploit the public perception that ‘deepfakes are everywhere’ to undermine trust in information, advancing false claims and discrediting true ones through the “liar’s dividend.”

The “liar’s dividend,” a term coined by legal scholars Robert Chesney and Danielle Keats Citron in a 2018 California Law Review article, suggests that “as the public becomes more aware about the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic audio and video as deepfakes.”

Fundamentally, it captures the spirit of political strategist Steve Bannon’s strategy to “flood the zone with shit,” as he stated in a 2018 meeting with journalist Michael Lewis.

As journalist Sean Illing comments in a 2020 Vox article, this tactic is part of a broader strategy to create “widespread cynicism about the truth and the institutions charged with unearthing it,” and, in doing so, erode “the very foundation of liberal democracy.”

There are already notable examples of the liar’s dividend in political contexts. In recent elections in Turkey, a video surfaced showing compromising images of a candidate. In response, the candidate claimed the video was a deepfake when it was, in fact, real.

In April 2023, an Indian politician claimed that audio recordings of him criticizing members of his party were AI-generated. But a forensic analysis suggested at least one of the recordings was authentic.  

Kaylyn Jackson Schiff, Daniel Schiff, and Natália Bueno, researchers who study the impacts of AI on politics, carry out experiments to understand the effects of the liar’s dividend on audiences. In an article forthcoming in the American Political Science Review, they note that bad actors who dismiss authentic media as fake tend to blame either their political opposition or “an uncertain information environment.”

Their findings suggest that the liar’s dividend becomes more powerful as people grow more familiar with deepfakes, priming media consumers to dismiss legitimate campaign messaging. It is therefore imperative that the public be confident it can differentiate between real and manipulated media.

Journalists have a crucial role to play in responsible reporting on AI. Widespread news coverage of the Biden robocall and the recent Taylor Swift deepfakes demonstrates that distorted media can be debunked, thanks to the resources of governments, technology professionals, journalists and, in the case of Swift, an army of superfans.

This reporting should be balanced with a healthy dose of skepticism about the impact of AI on this year’s elections. Self-interested technology vendors will be prone to overstate that impact, and AI may serve as a stalking horse for broader dis- and misinformation campaigns that exploit worsening integrity problems on social media platforms.

How lawmakers are trying to combat the problem

Lawmakers in states across the country have introduced legislation to combat election-related, AI-generated dis- and misinformation. Bills in Alaska, Florida, Colorado, Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana, Idaho and Wyoming would require disclosure of the use of AI in election-related content, most within specific time frames before an election. A bill in Nebraska would ban all deepfakes within 60 days of an election.

However, the introduction of these bills does not guarantee they will become law. Their enforceability could also be challenged on free speech grounds, for instance by positioning AI-generated content as satire. And penalties would come only after the fact, or could be evaded by foreign entities altogether.

Social media companies hold the most power to limit the spread of false content, since they can detect and remove it from their platforms. However, the policies of major platforms, including Facebook, YouTube and TikTok, state that they will remove manipulated content only in cases of “egregious harm” or when it aims to mislead people about voting processes. This is in line with a general relaxation of moderation standards, including the repeal over the past year of 17 policies related to hate speech, harassment and misinformation at those three companies.

The platforms’ primary response to AI-generated content will be to label it as ‘AI-generated.’ For Facebook, YouTube and TikTok, the labels will apply to all AI-generated content, whereas for X (formerly Twitter) they will apply only to content identified as “misleading media,” as noted in recent policy updates.

This puts the onus on users to recognize these labels, which have not yet rolled out and will take time to adjust to. AI-generated content may also evade already overstretched moderation teams and go unremoved and unlabeled, creating a false sense of security for users. Moreover, with the exception of X’s policy, the labels do not specify whether a piece of content is harmful, only that it is AI-generated.

A deepfake made purely for comedic purposes would be labeled, but a manually altered video spreading disinformation might not. Recent recommendations from the oversight board of Meta, the company formerly known as Facebook, advise that “instead of focusing on how a distorted image, video or audio clip was created, the company’s policy should focus on the harm manipulated posts can cause.”  

The continued emergence of deepfakes is worrying, but they represent a new weapon in the arsenal of disinformation tactics deployed by bad actors rather than a new frontier. The strategies for mitigating the damage they cause are the same as before: developing and enforcing responsible platform design and moderation, underpinned by legal mandates where feasible, coupled with journalists and civil society holding the platforms accountable. These strategies are now more important than ever.
