Election Deepfakes: What To Do About Political Media That Doesn’t Mean What You Think It Means

In August 2023, the US Federal Election Commission (FEC) announced that it had accepted a petition from Public Citizen requesting rulemaking to address the anticipated onslaught of “deepfakes” in 2024 US campaign advertising, and that it was soliciting public comments on the petition.

Integrity Institute members Eric Davis, Diane Chang, Lucia Gamboa, Amari Cowan, Swapneel Mehta, Nichole Sessego, and David Evan Harris submitted comments to the FEC in October 2023. Eric also wrote this blog post (with contributions from members) about the importance of this issue. Read on, and make sure to check out Institute members’ full comments to the FEC here.


Generative Artificial Intelligence (a.k.a. Gen AI) technology offers enormous potential benefits for democratic systems. For example, imagine having a tool to explain opaque ballot initiatives, including how they relate directly to you. Imagine children learning about the legislative process by conversing with a bill.

However, (surprise) this post focuses on a politically corrosive use of Gen AI. “Deepfakes” are deceptively authentic audio, video, and images produced using Gen AI technology. The term (deep learning + fake) came into use in 2017 when an eponymously named Reddit account began uploading adult videos manipulated to appear as if celebrities were the central characters. Although the manipulation of media is a longstanding practice, Gen AI is a revolutionary jump forward, enabling the production of multimedia and text content at far greater speed, scale, and sophistication. Consequently, predecessors to deepfakes are now called cheapfakes or shallow fakes.

Gen AI’s capabilities, coupled with the growing accessibility of tools, have spurred the use of deepfakes affecting elections and other political outcomes globally. For example, two days before Slovakia’s national election in September 2023, a damaging audio deepfake of a party leader discussing how to rig the election was circulated on social media. Similar tactics were used to undermine a party leader in the UK. In the 2023 Chicago mayoral race, a candidate’s loss may have been caused by an 11th-hour deepfake depicting him as a supporter of police brutality.

In addition to their direct impact on elections, the threat posed by deepfakes can be exploited by political actors to undermine information integrity and public trust in credible news outlets. This is referred to as the liar’s dividend. The tactic is also served by foreign influence campaign techniques, particularly the firehose of falsehood — muddying the waters until people aren’t sure which information sources to trust. Consequently, skepticism of genuine audio and visual media increases (that’s right, the blog post title applies to this scenario as well). This, in turn, provides politicians and state actors stronger footing to deny evidence of actual events.

To add to the pile: (1) Political deepfakes are threats year-round, not just during elections. (2) Deepfakes are used to perpetuate hoaxes, such as a fake explosion at the Pentagon, which caused the stock market to dip briefly. (3) Since this blog post doesn’t cover Gen AI’s text production capabilities, here’s a troubling proof-of-concept system for managing a disinformation campaign, and here’s a report on the growth of AI-generated clickbait “news” sites.

By now, you may be wondering (aside from when I'm going to get to the FEC letter): how concerned should we be about elections in 2024 and beyond? Is it time to *panic*? The emergence of AI presents great benefits as well as a set of hard but manageable problems. Any panicking should be conducted responsibly.

Like many Trust & Safety issues, the challenges we're facing are multifaceted, without an easy fix. Disinformation, after all, is not caused by AI. We need to take a structural approach, integrating a framework of strategies including policy and regulation, technical measures, cross-sector collaboration, and education and awareness. 

The framework will need to account for both near-term and somewhat longer-term factors. For example, many tech companies are quickly moving to automatically watermark media produced by their AI tools. This is important progress, and should help in upcoming elections. Watermarking isn’t foolproof, but, as with most online or offline protective measures — say, enhancing the security of a bank vault — it adds another barrier that helps narrow the pool of potential threat actors. Watermarks are non-controversial and will complement other multisector efforts. 

However, open source systems may eventually become suitable as accessible, watermark-free alternatives to mainstream Gen AI tools. Consequently, longer-term planning should support the continued development and widespread adoption of systems for establishing media provenance, basically providing ground truth for distinguishing original media from manipulated versions. The Starling Lab is doing interesting work in this space, partnering with Reuters and others and using a variation of blockchain technology (yep, blockchain). In short, images are digitally signed at the point of capture; information is registered onto public ledgers and preserved in cryptographic archives. 
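To make the provenance approach more concrete, here is a minimal sketch (in Python, using the third-party cryptography package) of what point-of-capture signing and later verification might look like. It is an illustration of the general idea only, not the Starling Lab or C2PA implementation; the record format and function names are hypothetical.

```python
# A minimal illustration of point-of-capture media provenance: hash the original
# file, sign the hash with the capture device's private key, and later verify a
# circulating copy against that signed record. Real systems add richer metadata
# and register records on public ledgers; this is not their API.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def register_capture(image_bytes: bytes, signing_key: Ed25519PrivateKey) -> dict:
    """Create a provenance record that could be posted to a public ledger."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = signing_key.sign(digest.encode())
    return {
        "sha256": digest,
        "signature": signature.hex(),
        "captured_at": time.time(),
    }


def verify_copy(candidate_bytes: bytes, record: dict, public_key: Ed25519PublicKey) -> bool:
    """Check that a circulating copy matches the originally registered capture."""
    digest = hashlib.sha256(candidate_bytes).hexdigest()
    if digest != record["sha256"]:
        return False  # Bytes differ from the original: edited, re-encoded, or faked.
    try:
        public_key.verify(bytes.fromhex(record["signature"]), digest.encode())
        return True
    except InvalidSignature:
        return False  # Record was not signed by the claimed capture device.


# Example: sign a capture, then check an unaltered copy and a tampered one.
key = Ed25519PrivateKey.generate()
original = b"raw image bytes from the camera"
record = register_capture(original, key)
print(json.dumps(record, indent=2))
print(verify_copy(original, record, key.public_key()))               # True
print(verify_copy(original + b"tamper", record, key.public_key()))   # False
```

In a full provenance system, the signed record would be registered on public ledgers and preserved in cryptographic archives, so anyone encountering a copy of the media later could check it against the original registration. Note that even benign transformations (recompression, resizing) change the hash, which is part of why production systems also track edit history rather than relying on a single checksum.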

The rest of this blog post walks through regulatory and legislative considerations, beginning with our feedback to the Federal Election Commission and building on those themes in the legislative recommendations section. Future posts will discuss other components essential for a robust response to the challenges of AI-powered political skullduggery and related issues. As part of this, we’ll be elaborating on the AI section of our best practices guide for supporting healthy elections across platforms.

Our Feedback to the FEC

Our letter endorses Public Citizen’s position that the FEC has the authority to regulate certain uses of election deepfakes. We express why we believe the risks of AI-manipulated and AI-generated deceptive media are a threat to democratic integrity and, as such, warrant exceptional scrutiny for communications relating to elections. We briefly examine and offer suggestions for what policy compliance would look like. Our suggestions were developed to be within the parameters of the FEC’s authority as well as consistent with related guidelines. We discuss implications for platforms, and we share a subset of recommendations for taking a blended approach to addressing the broader issues.

In a nutshell, Public Citizen’s petition contends that the deliberately deceptive use of deepfakes in official election campaign communications violates existing FEC prohibitions against “fraudulent misrepresentation.” In keeping with our read of the FEC’s regulations, the remedy for this is, in practical terms, that deepfakes must be disclosed as such, in context.

We believe the FEC has an essential role in addressing election deepfakes. However, compared with the scope of the overall problem, the FEC’s purview is narrow, limited for the most part to deepfakes in official campaign ads and communications. So far, the primary attack scenario for election deepfakes has been the surreptitious release of content to the public a day or two before election day, outside official campaign channels. Our legislative recommendations take a broader look at the problem.

Legislative Recommendations

As a general rule, we believe that public policy should not get far ahead of emerging technologies or markets, instead taking the long view and acting deliberately over time as needed. However, as the manipulative use of Gen AI is a distinct and profoundly high-risk problem, we’re taking what might best be described as a cautiously proactive approach. 

Scope

California, Minnesota, Texas, Washington, and other U.S. states have passed bills applicable to election deepfakes and, in some cases, cheapfakes. Additionally, there's draft legislation in various stages in other states and at the federal level. This forward progress is encouraging, especially when factoring in the bipartisan support for certain efforts. However, the scope of some of the legislation is overly narrow, dependent on secondary factors, such as whether AI was used to produce the deceptive media.

The core problem that legislation should seek to address is the use of materially deceptive audio, video, and images to malignly influence elections and other political outcomes. 

  • How should “material deception” be defined? By using a “reasonable person” standard — this is an established approach, used in a wide variety of court cases, particularly those involving consumer fraud. It’s also used in California’s applicable election code, passed in 2019:

“(1) The image or audio or video recording would falsely appear to a reasonable person to be authentic. (2) The image or audio or video recording would cause a reasonable person to have a fundamentally different understanding or impression of the expressive content of the image or audio or video recording than that person would have if the person were hearing or seeing the unaltered, original version of the image or audio or video recording.” Cal. Elec. Code § 20010

  • The technology or tools (e.g., AI, iMovie), platforms (e.g., TV commercials, robocalls), and actors (e.g., state parties, PACs) involved with making and delivering manipulated media shouldn’t be a factor in determining whether the media is violative. Which is to say, a deceptively altered photo is deceptive regardless of whether it was prepared by campaign staff using Gen AI tools or expertly photoshopped by Crazy Uncle Bob (“not so crazy now, am I?”). Likewise, a deepfake video shared in a campaign commercial or anonymously posted to social media by foreign influence campaign operatives is still a deepfake video.

    • Another benefit of taking a technology-agnostic approach rather than narrowing the scope to Gen AI is that it helps “future-proof” the requirements. It’s likely that in the future there will be other technologies that also take giant leaps forward.

  • Most deepfake-related legislation limits regulatory authority to a window of up to 90 days before election day. Oversight should instead be year-round, since campaigns (benign and malign) to influence political outcomes are not limited to elections: consider a bill moving through Congress, a political appointment to fill a vacated Senate seat, or an upcoming school board vote on a contentious topic.

  • Deepfake creations used for parody, satire, and artistic expression (or some combination thereof) should be out of scope. However, if a reasonable person, regardless of their awareness of current events, the candidates, or the communication styles or political beliefs of particular candidates, can’t distinguish these deepfakes from authentic media, then they should be treated as violative unless they’re properly labeled.

Disclosures and Warnings

  • What qualifies as clear, conspicuous, and impactful disclosure for deepfakes, especially when the disclosure is competing with the deepfake’s actual content for the user’s attention? We provide suggestions in our feedback to the FEC as a starting point. Rules for meaningful disclosure should remain the subject of research and regular re-evaluation. For example, if just a few spoken words of a video are subtly replaced, a label accompanying just the altered parts might be missed; conversely, a warning label that persists throughout such a video may be disregarded because it’s not obvious what it applies to (dubiously crazy Uncle Bob says it’s just legalese).  

  • Policymakers should be wary of mandates for the broad use of labels and warnings, e.g., labeling all campaign media that uses AI. Such practices quickly lead to diminishing returns, as users are more likely to disregard them as noise. Although the labeling of Generative AI-produced content is better than no labeling of any kind, ideally only malign media would be labeled.

Impersonation

  • Bots do not have free speech rights. We recommend prohibitions against the use of bots to project artificial identities (e.g., in social media posts, chatbots, email correspondence) or impersonate real people without consent for purposes of influencing elections and other political outcomes. California has such legislation on the books; it requires clear disclosure, applies to both commercial transactions and elections, and applies year-round rather than just during election season.

  • Similarly, computer crime laws in some states, such as Rhode Island, prohibit online impersonation. Such laws could potentially be amended to clarify that bots and non-consensual deepfakes are also in scope, which would also help advance other much-needed protections, such as those against non-consensual deepfakes of intimate images.


In responding to the problem of political deepfakes, we need to be mindful of the rapidly changing landscape as well as the risk of overcorrecting or undercorrecting. Deepfakes are the most pronounced challenge, not the only one. In upcoming blog posts, we’ll continue to explore critical elements of a structural, cross-sector response to AI-powered political manipulation.
