Misinformation Amplification Analysis in the US Midterm Elections

By Jeff Allen, Integrity Institute Chief Research Officer and Co-Founder


For the US midterm elections, the Integrity Institute has been tracking how misinformation is amplified on large platforms. To do this, we have been tracking the “Misinformation Amplification Factor” (MAF): how much engagement a misinformation post gets relative to the engagement we would expect based on the average engagement of the account that posted it. In short, it measures how much additional engagement you should expect to get on the platforms for posting misinformation. For more information on our methodology, please read our full analysis.
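To make the definition concrete, here is a minimal sketch of how a per-post amplification factor could be computed. The column names, the example data, and the choice to aggregate with a median are assumptions for illustration, not the exact pipeline from our full analysis.

```python
import pandas as pd

# Hypothetical data: one row per fact-checked misinformation post.
# "engagement" is what the post actually received; "expected_engagement"
# is the posting account's typical engagement, i.e. what an ordinary post
# from that account would be expected to get.
posts = pd.DataFrame({
    "platform": ["facebook", "facebook", "instagram", "twitter"],
    "engagement": [5200, 310, 900, 18000],
    "expected_engagement": [1000, 250, 600, 400],
})

# Per-post amplification: observed engagement relative to expected engagement.
posts["amplification"] = posts["engagement"] / posts["expected_engagement"]

# A platform-level MAF then summarizes per-post amplification across all
# fact-checked misinfo posts on that platform (median used here as one option).
print(posts.groupby("platform")["amplification"].median())
```

A value above 1 means a misinformation post earned more engagement than the posting account's baseline would predict.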

In this quick update, we look specifically at how election-related misinformation performs on the platforms. As we head into the midterms, fact checkers have been investigating more posts with voting and election misinfo, and we now have enough data to give a rough measure of the MAF for election- and voting-related misinfo on Facebook, Instagram, and Twitter. By voting and elections misinfo, we specifically mean false claims around the mechanics of voting and the overall integrity of elections. This includes specific false claims of election fraud as well as broad and general false claims that elections are invalid. It does not include, for example, false claims from candidates about their own policies or the policies of their opponents.
For Facebook and Instagram, we do not see any significant difference between the overall MAF and the elections and voting MAF. But for Twitter, we see a significantly smaller MAF for elections and voting misinfo.
  • Facebook Elections MAF: 3.0 (1.6 - 5.6, 90% confidence interval)
    • Compared to 4.2 for overall
  • Instagram Elections MAF: 2.0 (1.3 - 3.0, 90% confidence interval)
    • Compared to 2.6 for overall
  • Twitter Elections MAF: 6.6 (3.0 - 14, 90% confidence interval)
    • Compared to 32 for overall
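The intervals above come from the methodology described in our full analysis. Purely to illustrate the idea, one common way to put a 90% interval on a summary statistic like this is a percentile bootstrap over per-post amplification values; the data and the bootstrap choice below are assumptions for the sketch, not necessarily how we produce the numbers above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical per-post amplification values for one platform
# (observed engagement divided by the poster's expected engagement).
amplification = np.array([4.1, 0.8, 12.3, 2.2, 6.7, 1.5, 3.9, 0.6, 9.4, 2.8])

# Percentile bootstrap: resample posts with replacement, recompute the
# summary statistic each time, then read off the 5th and 95th percentiles.
boot = [
    np.median(rng.choice(amplification, size=amplification.size, replace=True))
    for _ in range(10_000)
]
low, high = np.percentile(boot, [5, 95])
print(f"MAF ~ {np.median(amplification):.1f} (90% CI: {low:.1f} - {high:.1f})")
```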
The elections MAF is above one for all platforms, which means that every platform is creating an incentive to post misinformation about voting and the integrity of our elections. Even taking into account all the work platforms do to prevent elections and voting misinformation, if you post voting misinfo, you should expect to be rewarded by the platforms with more engagement than you usually get.
The fact that the MAF is comparable between elections misinfo and other topics for Facebook and Instagram means that you should expect the same boost to engagement for posting elections and voting misinfo as you’d get for any other misinfo topic.
For Twitter, however, you should expect a substantially smaller engagement boost for elections and voting misinfo. This is interesting, and a few different factors could be contributing to it:
  1. Some of the most engaged elections misinfo in our data set came from accounts that have been deleted from Twitter. This is probably for the best, but it means we can’t pull in historical engagement data for them and can’t compute their MAF. So we are missing some of the most engaging elections misinfo on Twitter, which likely leads to an underestimate of the MAF.
  2. Twitter has safeguards in place for elections and voting misinformation which prevent it from going viral to the same degree as other types of misinfo. We don’t know what systems Twitter has in place for election and voting related Tweets, so we can’t know for sure. However, if Twitter followed our transparency recommendations around harmful content and algorithm design, then we would!
  3. Election misinfo just isn’t catching enough interest to go extremely viral right now. Qualitatively speaking, the examples of election misinformation Tweets in our dataset lack the kind of conspiratorial specificity that we see in the Facebook and Instagram posts. However, we should expect this to change as we get closer and closer to the election.
We will continue to monitor the MAF and election MAF through the US midterm elections.

Update on Overall MAF for Platforms

We have been monitoring the MAF for large platforms for two months now. We have not seen any significant movement in the weekly MAF for any platform. This means that platforms likely haven’t implemented any “break the glass” measures that had a significant impact, and that no misinformation narratives have taken hold in a significant way. We will see whether both of these remain true through the election.
 
 

Misinformation Amplification Is Preventable

One big reason misinformation is amplified on social media is that the platforms are designed to maximize engagement and show users “what they want to see”, rather than to optimize for a healthier information ecosystem. But there are alternatives to engagement-based ranking and engagement-focused design. Our "socially aware PageRank", described in our original MAF analysis, uses Google's foundational PageRank algorithm to compute PageRank scores for social media accounts. PageRank continues to do a fairly good job of separating misinformation posts from the sources of ground truth that fact checkers turn to for accurate information.
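As a rough sketch of the idea (not our exact implementation), account-level PageRank can be computed over a directed follow graph with an off-the-shelf library; the graph, account names, and edge semantics below are made up for illustration.

```python
import networkx as nx

# Hypothetical directed graph: an edge A -> B means account A follows
# (or otherwise endorses) account B, so authority flows from A to B.
follows = [
    ("user_1", "local_news"), ("user_2", "local_news"), ("user_3", "local_news"),
    ("user_1", "election_board"), ("local_news", "election_board"),
    ("spam_account", "user_1"), ("spam_account", "user_2"),
]
graph = nx.DiGraph(follows)

# Standard PageRank with the usual 0.85 damping factor. Accounts that many
# other accounts point to (directly, or via other well-endorsed accounts)
# score high; an account that nobody follows stays near the teleport floor
# no matter how many accounts it follows itself.
scores = nx.pagerank(graph, alpha=0.85)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account:15s} {score:.3f}")
```

Ranking or demoting content based on scores like these is one way to tilt distribution toward sources the broader graph treats as credible, rather than toward whatever maximizes engagement.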
[Chart: PageRank scores of misinformation posts compared with fact checkers’ sources of ground truth]
In our previous version of this chart, we did not see any misinformation posts with a PageRank score above 1.25. However, with more recent data, we see a trickle of misinformation with high PageRank scores. These are mostly due to the posts of political candidates! As we approach the elections, fact checkers have caught political figures posting misinformation to their social media accounts, which primarily takes the form of exaggerated false claims about their accomplishments or the policies of their opponents.
PageRank isn’t a panacea for misinfo, and certainly not for misinfo from public figures, but it still helps tilt the scales in favor of accurate information.
Jeff Allen

Jeff Allen is the co-founder and chief research officer of the Integrity Institute. He was a data scientist at Facebook from 2016 to 2019. While at Facebook, he worked on tackling systemic issues in the public content ecosystems of Facebook and Instagram, developing strategies to ensure that the incentive structure the platforms created for publishers was in alignment with Facebook’s company mission statement.
