New research and book publication! Plus: vote for SXSW member proposals

Hello!

Since the Integrity Institute launched our new platform transparency overview two weeks ago, several events have demonstrated the importance of platform transparency once again.

  • The first studies from Meta’s 2020 election experiments were published, offering unique insights into political polarization on Facebook and Instagram while leaving plenty of questions worthy of independent research.

  • The company formerly known as Twitter is suing the Center for Countering Digital Hate, which has conducted research into the spread of hateful content on social media platforms.

As we have said consistently, transparency from platforms can be a powerful lever for achieving a number of public interest goals, and efforts to encourage meaningful transparency – including transparency that enables research – deserve our support. In that spirit, this newsletter brings you even more research on platforms by Institute members.

Evaluating Meta’s experiments

Institute members have been reading and discussing studies published from Meta’s 2020 election experiments. This week, Institute fellow Tom Cunningham published his take, finding that these experiments were not big enough to test the theory that social media has made a substantial contribution to polarization in the United States. Here’s a summary of Tom’s findings:


Three new experiments show that changing Facebook’s feed ranking algorithm for 1.5 months has an effect on affective polarization of less than 0.03 standard deviations. This is small compared to a growth of 1.1 standard deviations in nationwide affective polarization over the last 40 years.

Small effects in these experiments are consistent with large effects in aggregate. The aggregate contribution of social media to polarization will differ from these experimental estimates in a number of ways: depth, breadth, duration, timing, category, and population. My rough attempts to account for these considerations make me think the aggregate effect is likely 10 or 20 times larger than the effects that would be measured in these experiments, and so small effects in these experiments are consistent with large effects on aggregate.

Put simply: these experiments measure the effect of reducing an individual user’s exposure (not their friends’ and family’s) to political content on Facebook by 15% for 1.5 months, and they were run in a period after Facebook had already sharply reduced the amount of partisan content circulating. Thus we should expect them to measure only a small fraction of the cumulative impact of social media, and in fact these results are consistent with social media being entirely responsible for the growth of polarization in the US.

Nevertheless, other evidence implies that social media has probably not made a huge contribution to US polarization. If we wish to evaluate the balance of evidence relating social media to polarization, there are many other sources which are probably more informative than these experiments. I give a rough sketch below, and it seems to me social media probably does not account for a majority share, mainly because (1) polarization had been growing for 20 years prior to social media’s introduction, and much of the growth since 2014 was in people without internet access; (2) a lot of partisan discourse continues to spread outside of social media, e.g. through cable TV and talk radio; (3) other countries do not show a similar increase in affective polarization.

Discussion of these results has been distressingly non-quantitative. The majority of discussion of these results (in papers, editorials, on Twitter) has been about whether these changes “have an effect” or “do not have an effect.” Interpreted sympathetically, these statements are compressed ways of saying “an effect larger than 0.03 standard deviations.” However, I think taking this shortcut so consistently has led to far too little time thinking about what we have learned from these experiments that we didn’t already know, and what the balance of evidence regarding the effects of social media actually is. I give a lot of examples below.

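To make the scaling argument concrete, here is a rough back-of-envelope calculation of our own (not from Tom’s post): it multiplies the experimental upper bound by the 10x and 20x factors Tom suggests and compares the result to the 1.1 standard deviation growth in polarization. It is only an illustration of why a small experimental effect does not, by itself, rule out a large aggregate contribution.

```python
# Back-of-envelope sketch of the scaling argument, using the rough figures
# quoted above. The 10x and 20x multipliers are Tom's illustrative estimates
# of how much larger the aggregate effect of social media could be than what
# a short, individual-level experiment can measure; they are not precise
# parameters.

experiment_effect_sd = 0.03  # upper bound measured in the 2020 experiments (standard deviations)
aggregate_growth_sd = 1.1    # growth in US affective polarization over ~40 years

for multiplier in (10, 20):
    implied_aggregate_sd = experiment_effect_sd * multiplier
    share_of_growth = implied_aggregate_sd / aggregate_growth_sd
    print(
        f"{multiplier}x multiplier -> implied aggregate effect of {implied_aggregate_sd:.2f} SD, "
        f"roughly {share_of_growth:.0%} of the 1.1 SD growth"
    )
```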

We are thrilled that Tom will be conducting original research projects on social platforms (such as this independent evaluation) at the Institute, and we hope you’ll read and follow his work!

What would democratic platforms look like?

One of the main reasons we’re called the Integrity Institute is that we believe in designing social platforms for structural integrity. Technical design can prevent and mitigate many of the harms we see on platforms now, but platform governance is another crucial piece of that structural integrity. Therefore, we’re thrilled that Institute founding fellow Paul Gowder’s book The Networked Leviathan: For Democratic Platforms is now out in the world! In Paul’s words:


We all—users, businesses, governments, and the general public—expect internet platform companies, like Meta, Alphabet, and Amazon, to govern their users. Without platform governance, we all experience disasters like foreign election interference, vaccine misinformation, counterfeiting, and even genocide.

Unfortunately, the platform companies have failed. To this day, despite the lessons from years of missteps and billions of dollars of spending in enterprises like content moderation, the major internet companies have been unable to prevent their platforms from hosting misinformation, scams, incitement, and hate. Nobody (except the perpetrators) wants this result. The failures of platform companies result not from malice, but from companies' inability to manage the complexity of their userbases and products and of their own incentives under the eyes of conflicting internal and external constituencies.

The research of scholars in political science and other academic disciplines can help companies and governments make progress on the problem of platform governance. Political scientists, constitutional theorists, and other scholars of governance have been studying the efforts of states to govern under complexity for centuries under theoretical rubrics like the problem of knowledge and incentive-compatible institutional design. The Networked Leviathan argues that this hard-won knowledge about states also applies to platforms. The insights from the research in political science and allied disciplines lead inexorably to the conclusion that governments and companies should collaborate to build democratic institutions for platform governance. By permitting ordinary people from across the world to participate in the governance enterprise, we allow those with the knowledge critical to making and applying platform rules to deploy that knowledge where it can make an impact. Democratic governance also allows companies to recruit third parties to help manage their own capacity to make and stick to decisions.

The Networked Leviathan offers a case and a roadmap for democratizing the platforms.


Institute executive director Sahar Massachi said of Paul and The Networked Leviathan, “No one else can weave philosophical theories of governance and virtue, practical technical understanding of platforms, and political science into such a compelling package. Bravo!” You can read The Networked Leviathan in its entirety – for free! – here.

Vote for SXSW member proposals!

Although it’s only August, Institute members are already looking ahead to next year! The public voting process for the SXSW 2024 program has begun and will last until August 20. Below is a list of proposals involving Institute members – we would love to have your votes!


Thank you for your ongoing support for the Integrity Institute. As always, reach out with what’s on your mind and let’s talk!

Sean Wang, partnerships and collaborations manager
