Global Elections Playbook: AI Edition

By Alexis C. Crews, Integrity Institute Resident Fellow

This piece also appears on Medium.com.


Over the past few weeks, I’ve engaged in numerous discussions with peers in the tech industry, friends in politics and policy, and former colleagues in the foreign policy sector. A recurring theme emerged: how AI will influence global elections in a year when nearly half the world’s population heads to the polls.

Many are asking how AI will affect the global election cycle and how companies should address the risks and opportunities it presents. Generative AI will not single-handedly determine election outcomes, but it remains a tool that can be readily accessed and exploited by ‘bad actors’ globally.

Generative AI can produce content that spreads misinformation and amplifies disinformation campaigns on a global scale. Because no industry-wide mechanism exists for eradicating false information across all platforms, the responsibility falls on the companies generating the content to establish the necessary safeguards. Those safeguards range from policy formulation to email verification, geolocation tagging, and monitoring of slurs, dog whistles, and any content related to political parties, candidates, elected officials, and official voting information. Below, I outline some thoughts on structuring teams and prioritizing tasks for the upcoming election cycle.

How I would structure a team and build a framework for election integrity:

  • Global Elections Lead

  • Operations SME (Subject Matter Expert)

  • Data SME

  • Policy SME

  • Language SME

  • Legal SME

  • Partnerships SME

  • Communications SME

  • Crisis Response SME

Subject matter experts provide crucial leadership, but success ultimately hinges on adequate staffing. A truly global response program needs enough personnel to maintain a comprehensive on-call schedule; without that coverage, the program cannot be effective.

Framework:

1. Partnerships

  • In the US, collaborate with every state Secretary of State (SOS) office on fundamental training, allocating each state a budget for election readiness and prioritizing states that have already expressed concerns about election integrity. Partner with the Bipartisan Policy Center (BPC) to conduct red-teaming exercises and educate officials on software usage. Since every country has an election commission, it is equally vital to build relationships with each one to guarantee information accuracy and help them leverage the platform effectively.

  • Engage with national and international organizations representing minority groups to ensure that slurs and specific phrases are incorporated into the system and that classifiers are in use. Such phrases should trigger flags in the system and prevent the generation of responses. This should include variations in spelling, acronyms, and translations.

  • Work with global watchdog organizations that monitor misinformation trends and can report relevant trends and language to OpenAI, aiding in the update of classifiers that require monitoring. This is particularly important in countries governed by authoritarian regimes.
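The partner-reported term lists described above could feed a simple flagging step before generation. Below is a minimal sketch, assuming a blocklist of slurs and coded phrases supplied by partner organizations; the function names and normalization approach are illustrative, not any real OpenAI moderation API, and a production system would pair this with trained classifiers rather than bare substring matching.

```python
import unicodedata


def normalize(text: str) -> str:
    """Lowercase and strip accents so common spelling variants match one entry."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return text.lower()


def build_blocklist(terms: list[str]) -> set[str]:
    """Normalize each reported term (including translations and acronyms)."""
    return {normalize(t) for t in terms}


def should_flag(prompt: str, blocklist: set[str]) -> bool:
    """Flag the prompt, and suppress generation, if any blocked term appears.

    Substring matching is a deliberate simplification here; it over-flags
    (e.g. terms embedded in longer words), which real classifiers avoid.
    """
    normalized = normalize(prompt)
    return any(term in normalized for term in blocklist)
```

The normalization step is what lets a single blocklist entry catch accent, case, and spelling variants, which is exactly the coverage the partner organizations would be asked to provide.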

2. Operations/Policies

  • Utilize geo-targeting to pinpoint the locations of users generating campaign or election-related content using OpenAI products. This is just one of many steps to identify foreign interference by known actors worldwide.

  • For ballot and procedural inquiries, provide language that directs users to the official SOS website or CanIVote.com as an alternative, noting that state websites are often more current than national ones.

  • Similarly, for each country, provide language directing users to the official election commission or authority site. In authoritarian states, guide users to watchdog organizations that can offer accurate information regarding official election procedures.

  • For politically charged or subjective questions about candidates, offer a standard response such as:

“I am a language model. Please consult a search engine for more detailed information on specific political issues.”

  • For inquiries related to current elected officials, provide relevant information and include a disclaimer about political views, urging users to conduct their own research:

“Here’s information related to [official], the current holder of [position]. For details on their political views or campaign, please consult a search engine for more information.”

  • For requests involving the creation of election-related materials (ads, fundraising emails, campaign slogans, etc.), restrict access to verified campaign or elected office accounts. Other users should receive a message stating:

“Certain features are accessible only to verified account holders. Please log in with your verified account.”

3. Create internal policy exceptions tailored to each market, acknowledging specific trends, political organizations, slurs, etc.

4. Flag any content utilizing the names of candidates or elected officials for creating content, including images, that could be construed as deep fakes or misinformation.
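The response rules in sections 2–4 amount to a routing table from query intent to policy action. A minimal sketch follows, assuming the intent has already been classified upstream; the intent labels, templates, and `User` type are hypothetical placeholders, not a real product interface.

```python
from dataclasses import dataclass

# Policy response templates, taken from the examples in the text above.
BALLOT_TEMPLATE = (
    "For official ballot and procedural information, please visit your "
    "state Secretary of State website or CanIVote.com."
)
SUBJECTIVE_TEMPLATE = (
    "I am a language model. Please consult a search engine for more "
    "detailed information on specific political issues."
)
VERIFIED_ONLY_TEMPLATE = (
    "Certain features are accessible only to verified account holders. "
    "Please log in with your verified account."
)


@dataclass
class User:
    verified_campaign: bool = False  # verified campaign/elected-office account


def route(intent: str, user: User) -> str:
    """Map a classified election-related intent to the policy response.

    Returns a canned template, or the sentinel "GENERATE" when the
    request may pass through to normal generation.
    """
    if intent == "ballot_procedure":
        return BALLOT_TEMPLATE
    if intent == "candidate_opinion":
        return SUBJECTIVE_TEMPLATE
    if intent == "create_campaign_material":
        return "GENERATE" if user.verified_campaign else VERIFIED_ONLY_TEMPLATE
    return "GENERATE"
```

Keeping the templates in one routing function makes the per-market policy exceptions from item 3 easy to audit: each market can swap in its own table without touching the classifier.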


How generative AI is used during this global election cycle will shape the nature and extent of the regulations countries impose in the future. While attention remains fixed on traditional social media platforms, earnest discussions about the role of generative AI in society and in this election cycle will determine the level of governmental intervention going forward.
