Mid-Year Verdict: The State of Global Digital Election Integrity in 2024
By Manish Kumar, Integrity Institute Research Intern and graduate candidate at the University of California, Berkeley
Contributors to this article include Kay Spencer, an expert in peace operations and current Elections Program Director at the National Democratic Institute (NDI), and members of the II Elections Best Practices Working Group.
Global Election Overview
The 2024 elections worldwide have been marked by a diverse array of outcomes, political shifts, and notable challenges to election integrity. The first half of the year alone has seen elections in approximately 30 countries, including Taiwan, Pakistan, Indonesia, Bangladesh, Mexico, Russia, the UK, and India, as well as the European Union, together representing a significant portion of the world's population. Overall, the results of these elections indicate a global shift toward center-right and right-wing politics. We also observed an upsurge in the use of technology, including AI, to shape various aspects of the electoral process. This article reflects on the key trends observed, assesses the readiness and responses of various platforms, and draws critical lessons for the remaining six months of this significant election year.
Abusive Trends and Examples Compromising Election Integrity
In this section, we examine the trends observed in recent elections worldwide: the proliferation of misinformation, the emergence of deepfakes, the spread of hate speech, and instances of election interference, including various tactics of coordinated inauthentic behavior. These examples give an overview of the types of election-related challenges platforms face in the current context.
Misinformation/Disinformation: The scale of election-related misinformation detected so far in 2024 is staggering. NewsGuard's global team of misinformation analysts has identified 963 websites and 793 social media accounts and video channels associated with spreading false information, while the EU vs Disinfo initiative has documented 17,133 cases of disinformation. These figures underscore the pervasive nature of misinformation campaigns, and the true number of instances is almost certainly higher, as many cases go undetected. A spam campaign traced back to an actor associated with the Chinese Communist Party aimed to discredit Taiwan's president, and misinformation and conspiracy theories in the wake of the Trump assassination attempt revealed the role that misinformation plays in filling the void of unanswered questions after major events. These examples illustrate the sophisticated and varied tactics employed to manipulate public perception and interfere with electoral processes, and the speed at which these narratives spring up and spread. There has also been recent discussion of how satire can spread misinformation online and whether it should be labeled, adding another layer of complexity to misinformation management, particularly in political contexts.
Hate Speech and Threats of Violence: Hate speech has further complicated election integrity. In India, Prime Minister Modi's election campaign faced allegations of promoting hate speech against Muslims, which was amplified on Meta platforms such as Facebook and Instagram. Meta also approved a series of AI-manipulated ads in India that allegedly incited violence. In the U.S., election officials have experienced a surge in physical threats and harassment, driven by online false narratives about election integrity. These threats include doxxing and swatting attacks, significantly compromising the safety and mental well-being of election officials.
Deepfakes: The proliferation of deepfakes presents another significant threat. In the UK, over 100 deepfake video advertisements impersonating Rishi Sunak were promoted on Meta's platforms in a single month. In Bangladesh, deepfake videos showed candidates withdrawing from the elections on election day. While we have not yet seen an "11th-hour deepfake" that changes the course of an election, deepfake videos and photos have become a regular part of the stream of misinformation content.
Government Content Takedowns: Government requests to take down content have also raised concerns about the availability of information during election periods. In India, X was asked to remove posts criticizing the government, and it chose to comply. Such actions pose a challenge to free speech and democratic processes, underscoring the delicate balance platforms must maintain between complying with local laws and upholding democratic principles.
Coordinated Inauthentic Behavior (CIB): Countries like China, Russia, and Iran continue their efforts to influence election outcomes by spreading misinformation on social media platforms. Chinese networks have targeted the 2024 US election, while Russia focuses on creating fictitious media brands to influence political narratives. Despite Meta's efforts to disrupt these networks, many accounts remain active. The Tech Transparency Project (TTP) reported that Facebook hosts a black market for fake and stolen accounts, some of which are authorized to run political ads in India, raising concerns about potential election interference. Additionally, Chad's election was notably influenced by pro-Russian networks attempting to manipulate online narratives.
Declining Civil Society Access to Data and Electoral Integrity: The declining access to data for civil society groups is a growing concern for electoral integrity. In Georgia, civil society organizations are struggling to monitor public Facebook groups due to the phasing out of CrowdTangle, a tool that has been crucial for tracking misinformation and coordinating responses. Without reliable access to such data, monitoring becomes time-consuming and risks missing critical information. Similarly, in South Africa, researchers find it increasingly difficult to monitor hate speech and disinformation without cooperation from major tech companies. The reduction in accessible data hampers the ability of civil society to hold platforms accountable and ensure transparent elections, highlighting the need for continued access to essential monitoring tools and data.
Regional Trends
Challenges vary significantly by region. In some areas, lack of access to robust data monitoring tools hampers the ability to track and counteract misinformation and disinformation effectively. Countries with stringent state control over media, such as Russia, Turkey, Egypt, Venezuela, and Hungary, where censorship stifles opposing voices, face greater difficulties in maintaining election integrity. Civil society stands as the last line of defense against a tidal wave of disinformation, yet in countries with limited resources, these vital groups are increasingly under siege, struggling to monitor and respond to these threats effectively.
The effectiveness and enforcement of platform policies also vary. Analyzing election trends through the DSA transparency reports and the associated transparency database presents significant challenges due to the lack of standardization across companies and regions. The terminology used, the actions reported, and the timeframes covered vary widely, making it difficult to draw consistent trends or comparisons. For instance, while some companies might report on actions taken against election-related misinformation, others may not include this data at all or may use different criteria to classify such actions. This inconsistency hampers the ability to develop a cohesive understanding of how effectively misinformation is being addressed across different platforms during election periods.
Tech Companies’ Election Integrity Efforts
Tech companies claim they are taking significant measures to ensure election integrity for the 2024 global elections. According to the DSA Transparency Database, around 12 billion "statements of reasons" have been submitted by 25 platforms, with Google Shopping contributing the most at approximately 2 billion. The most common violation type was Scope of Platform Service, accounting for close to 2 billion instances, followed by Harmful Speech at 46 million cases. OpenAI states that it is focusing on preventing AI system abuse, ensuring transparency of AI-generated content, and providing accurate voting information, while Anthropic is conducting extensive Policy Vulnerability Testing with external experts to identify and mitigate risks, updating its models and policies accordingly, and developing automated evaluations for election-related scenarios.
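The DSA Transparency Database publishes these statements of reasons as downloadable daily dumps, so figures like those above can be reproduced and tracked over time. Below is a minimal sketch of such an analysis in Python; the column names used (platform_name, category, created_at) are illustrative assumptions and should be checked against the schema of the actual dumps.

```python
# Sketch: aggregating statements of reasons from a DSA Transparency
# Database extract. Column names are assumptions for illustration;
# verify them against the schema of the actual daily dumps.
import pandas as pd

def summarize_statements(csv_path: str) -> None:
    df = pd.read_csv(csv_path, parse_dates=["created_at"])

    # Overall volume and per-platform contributions.
    print(f"Total statements of reasons: {len(df):,}")
    print(df["platform_name"].value_counts().head(10))

    # Most common violation categories (e.g., scope of platform
    # service, harmful speech).
    print(df["category"].value_counts().head(10))

    # Monthly volume, useful for spotting spikes around election periods.
    print(df.set_index("created_at").resample("MS").size())

if __name__ == "__main__":
    summarize_statements("dsa_statements_extract.csv")
```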
Microsoft, Google, Meta, and TikTok are expanding their efforts to combat misinformation, provide authoritative voting information, and secure their platforms. While these measures are a step in the right direction, the question remains: will they be enough to ensure election integrity? Despite their multi-pronged approach, are tech companies employing proportionate resources to address the scale and complexity of election-related challenges? Notably, civil society groups have tested platform ad policies by submitting ads with deliberate misinformation and getting them approved, as seen in tests with TikTok and YouTube. The effectiveness of these efforts hinges on continuous improvement, rigorous enforcement, and substantial collaboration with both governmental and non-governmental organizations. As the election year progresses, it will be crucial to monitor and evaluate the impact of these initiatives in real-time, ensuring they adapt to emerging threats and effectively safeguard the democratic process.
Recommendations
As we look ahead to the remaining elections of 2024, including those in the US, Brazil, and Georgia, it is clear that abusive trends such as misinformation, deepfakes, hate speech, and coordinated inauthentic behavior will continue to pose significant challenges to election integrity. Recognizing these threats, the Integrity Institute has put forward a set of recommendations aimed at mitigating these issues and safeguarding the democratic process. The call to action is clear: tech companies and governments must mount a coordinated and robust response to ensure the fairness and transparency of elections worldwide.
Tech Companies
Strengthening Media Integrity and Information Quality:
Develop policies and systems to address manipulated media and promote accurate, high-quality, and authoritative information. Instead of focusing solely on detecting and mitigating misinformation, deepfakes, and coordinated inauthentic behavior in real time (an approach that can be impractical and cost-prohibitive at scale), tech companies should prioritize the following:
Promoting Accurate Information: Invest in and enhance systems that identify and elevate high-quality, accurate content, counteracting the spread of misinformation by providing users with reliable information. This can include defining quality standards for content and integrating them into algorithmic ranking and recommendations, so that quality is prioritized over factors like engagement that tend to surface lower-quality content (a minimal sketch of such a quality-weighted ranking score follows this list).
Comprehensive Policies for Manipulated Media: Establish and enforce policies for managing and responding to manipulated digital content that align with local laws and community standards, including requirements for clear and transparent labeling when media has been altered.
Scaling Fact-Checking: Scale fact-checking initiatives to verify content and provide accurate information to users. Although fact-checking has proven effective in combating misinformation, it remains underutilized and difficult to scale: capabilities exist in only a few languages, and the process of identifying and verifying content is slow, limiting its impact. Expanding these initiatives to cover more languages and improving their speed and efficiency is essential to enhancing their effectiveness.
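To make the ranking recommendation above concrete, here is a minimal sketch of a quality-weighted ranking score. The signal names and weights are illustrative assumptions, not any platform's actual ranking formula; the point is simply that quality signals can enter the score with enough weight to counterbalance predicted engagement.

```python
# Sketch: blending content-quality signals into a ranking score so that
# quality outweighs raw predicted engagement. All signals and weights
# are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    predicted_engagement: float  # 0-1, e.g., interaction likelihood
    source_reliability: float    # 0-1, e.g., from source-rating programs
    information_quality: float   # 0-1, e.g., from a quality classifier
    manipulation_risk: float     # 0-1, e.g., manipulated-media likelihood

def ranking_score(c: ContentSignals,
                  w_engagement: float = 0.3,
                  w_quality: float = 0.5,
                  w_risk: float = 0.2) -> float:
    """Higher scores rank higher; quality carries more weight than engagement."""
    quality = (c.source_reliability + c.information_quality) / 2
    return (w_engagement * c.predicted_engagement
            + w_quality * quality
            - w_risk * c.manipulation_risk)

# Example: a highly engaging but low-quality, likely-manipulated post
# scores below a moderately engaging, high-quality one.
print(ranking_score(ContentSignals(0.9, 0.2, 0.2, 0.8)))  # ~0.21
print(ranking_score(ContentSignals(0.5, 0.9, 0.8, 0.0)))  # ~0.57
```

In practice the weights would be tuned experimentally, and the quality signals would come from dedicated classifiers and source-rating programs rather than hand-set values.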
Collaboration with Civil Society:
Ensure that civil society groups have timely, granular, and analyzable access to data, along with affordable monitoring tools, to track and respond to misinformation and disinformation effectively.
Fund and support research and initiatives aimed at understanding and combating election-related abuses on digital platforms.
Public Awareness and Education:
Create election integrity pages for elections worldwide, providing users with reliable information and resources to counter voter suppression.
Launch comprehensive educational campaigns to inform users about the risks of misinformation, deepfakes, and hate speech, promoting digital literacy.
Government Bodies
Create comprehensive regulations to address emerging threats from digitally created content, especially but not limited to deepfakes used to deceive in a harmful way or with harmful intent, and include clear guidelines for tech companies. The positive effects of regulations like the DSA are becoming evident, particularly in holding platforms accountable and ensuring election integrity. Regulations focused on accountability mechanisms such as transparency, risk assessment, and design-based approaches offer more flexibility and are more future-proof than attempts to regulate specific types of content.
Create a standardized reporting framework for tech companies to be transparent about their efforts to curb election-related misinformation and abuses. This framework could include minimum requirements and be established through a Memorandum of Understanding (MoU) between the government and tech companies, a collaborative approach that ensures transparency and accountability while accommodating the varying capacities of different businesses. Any framework should go beyond platforms reporting the number of misinformation or policy-violating posts they removed. Specifically, frameworks should require meaningful transparency that covers the following (a sketch of how such exposure metrics might be computed appears after this list):
The scale of exposure to policy-violating content related to elections on the platform. This puts the raw number of removals into context: how many people were exposed to that content, and how large that group is relative to overall activity on the platform.
The causes of exposures to that content. This provides insight into the role that platform algorithms may be playing in promoting policy-violating content. Are users exposed to such content on recommendation surfaces?
The nature of exposures to that content. What is the distribution of exposures to policy-violating content among users? Does a smaller group of users see high volumes of this content, or is the content more widespread among all users?
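Here is a minimal sketch of how the scale and distribution metrics above might be computed from a hypothetical per-user exposure log. The field names are assumptions for illustration, and attributing exposures to causes (such as recommendation surfaces) would additionally require per-surface logging.

```python
# Sketch: exposure metrics from a hypothetical per-user log with columns
# violating_exposures and total_exposures (one row per user). Field
# names are assumptions for illustration.
import pandas as pd

def exposure_metrics(log: pd.DataFrame) -> dict:
    # Scale: share of all exposures that were policy-violating, and the
    # fraction of users exposed to such content at all.
    prevalence = log["violating_exposures"].sum() / log["total_exposures"].sum()
    share_exposed = (log["violating_exposures"] > 0).mean()

    # Nature: is exposure concentrated? Share of violating exposures
    # seen by the top 1% most-exposed users.
    top_n = max(1, len(log) // 100)
    top_share = (log["violating_exposures"].nlargest(top_n).sum()
                 / log["violating_exposures"].sum())

    return {
        "prevalence": prevalence,
        "share_of_users_exposed": share_exposed,
        "top_1pct_share_of_exposures": top_share,
    }

# Example with synthetic data: most users see none, a few see a lot.
log = pd.DataFrame({
    "violating_exposures": [0] * 95 + [1, 2, 3, 40, 54],
    "total_exposures": [200] * 100,
})
print(exposure_metrics(log))  # prevalence 0.005, 5% exposed, top 1% sees 54%
```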
Provide funding and resources to civil society organizations and researchers working on election integrity, ensuring they can effectively monitor threats and help authorities respond to them in accordance with local and international laws.
Questions? You can reach our team at hello@integrityinstitute.org.