Let Section 230 Stay

This piece, written by Integrity Institute fellow Karan Lala, first appeared on The Information and represents the author’s individual opinion. The Integrity Institute supports neither party in the case of Gonzalez v. Google.

Illustration by Shane Burke, as it appeared on The Information

Gonzalez v. Google, which the Supreme Court will hear this month, is the culmination of years of litigation. The action—a consolidation of lawsuits filed against Google, Twitter and Facebook—attempts to hold these platforms liable for their automated recommendation of content to users.

Social media platforms distribute content created by others rather than originating it themselves, and as such have long been considered immune from liability under Section 230 of the Communications Decency Act. This case calls into question not just that immunity but the entire economy of the internet as we know it. Reinterpreting Section 230 to remove immunity for algorithmic recommendations would make it nearly impossible for social platforms as we know them to function. That's bad for social media platforms, bad for social media users, and bad for speech in general.

Algorithmic ranking often comes off as an opaque boogeyman. The Netflix documentary The Social Dilemma, for instance, portrays the ranking algorithm as an omniscient council in a futuristic chamber, turning emotional dials and toying with users’ deepest vulnerabilities to keep them hooked.

This image is misleading, unhelpful, and irrationally fear-inducing. In reality, ranking and recommendations are key to identifying what content is most relevant and interesting for each one of us. We want Netflix to identify shows we might like. We want dating apps to surface people with whom we have common interests. We want Instagram to surface posts from the friends we care most about.

Recommendation systems range from the trivial to the extremely complicated, but in the interest of clarity, we can say that at the most basic level they all operate on the same general principles (a simplified sketch in code follows the list):

  1. A team decides on a set of metrics to optimize for, such as time spent on the app, the probability that a user will interact with a piece of content, etc.

  2. The team then develops a model to predict how a given piece of content will perform for each user based on that set of metrics.

  3. The team applies the model to determine which posts should appear to which users, in what order they should appear, etc.

  4. The team collects data on the model’s performance and uses it to refine future editions of the model.

  5. Repeat. Repeat. Repeat.
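To make those steps concrete, here is a deliberately simplified sketch of the loop in Python. Every name in it (ToyEngagementModel, rank_feed, the per-topic scoring rule) is invented for this article; no real platform's ranking system is anywhere near this simple.

```python
# A deliberately simplified sketch of the ranking loop described above.
# All names and the scoring rule are illustrative, not any platform's actual system.
import random

class ToyEngagementModel:
    """Step 2: predicts how likely a user is to interact with a post."""
    def __init__(self):
        self.weights = {}  # learned per-user, per-topic affinities

    def predict(self, user, post):
        # Score = how much this user has historically engaged with the topic.
        return self.weights.get((user, post["topic"]), 0.0)

    def update(self, user, post, did_engage):
        # Step 4: nudge the weight toward what the user actually did.
        key = (user, post["topic"])
        old = self.weights.get(key, 0.0)
        self.weights[key] = old + 0.1 * ((1.0 if did_engage else 0.0) - old)

def rank_feed(user, candidate_posts, model):
    """Step 3: order candidate posts by predicted engagement, best first."""
    return sorted(candidate_posts, key=lambda p: model.predict(user, p), reverse=True)

# Step 5: in production this loop runs continuously over billions of events.
model = ToyEngagementModel()
posts = [{"id": 1, "topic": "cooking"}, {"id": 2, "topic": "sports"}]
for _ in range(3):
    feed = rank_feed("alice", posts, model)
    clicked = feed[0] if random.random() < 0.5 else None  # simulated user behavior
    for post in posts:
        model.update("alice", post, post is clicked)
```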

Most people don’t have a problem with recommendation systems as such. What they take issue with are the specific metrics companies select and how the companies prioritize them relative to other concerns such as, say, keeping extremist content from reaching vulnerable users.

In the cases before the Court now, the plaintiffs are suing on behalf of loved ones killed by terrorists in Paris, Istanbul and San Bernardino, California. Aware of the broad civil immunity traditionally granted to online platforms by Section 230, they argue that the platforms are nevertheless liable under the Anti-Terrorism Act, which allows U.S. nationals to recover damages for injuries suffered “by reason of an act of international terrorism.” The plaintiffs argue that social media companies have both knowledge of and control over the content that appears and is recommended on their platforms, which they contend distinguishes platforms from traditional “publishers” and gives rise to a variety of liabilities. These include aiding and abetting terrorist groups by matching their propaganda with would-be jihadists and materially supporting them by sharing advertising revenue.

These arguments have nothing to do with how algorithmic recommendations actually work. It is ludicrous, for one, to imagine that platform companies made an active decision to promote terrorist content or to share ad revenue with terrorists. Platforms have content policies that prohibit terrorist activity on their services. They perform both proactive and retroactive reviews of terrorist content using a mix of human analysis and automated detection at scale. Moreover, social media companies depend on advertisers for revenue, and advertisers generally aren’t keen on having their ads show up next to posts by terrorists.

In the cases under consideration in Gonzalez, videos spreading ISIS propaganda were not removed in a timely fashion, and some of those videos were approved for ad monetization (which, the plaintiffs contend, means Google thereby agreed to share revenue with ISIS and ISIS-affiliated users). But there is a difference between a platform being careless about what it promotes and a platform giving its best, imperfect effort. The nature of the internet makes it technically infeasible for social media companies to review every single thing that gets posted.

In June 2021, the Ninth Circuit ruled against the survivors’ families in Gonzalez and in a similar case, Clayborn v. Twitter (which also counted Google and Facebook as defendants), stating that the plaintiffs’ allegations were insufficient to overcome the protections granted by Section 230. In a third case, however, Twitter v. Taamneh (which, again, also concerned Google and Facebook), the Ninth Circuit found that the plaintiffs had put forward sufficient allegations to show that the platforms provided or intended to provide substantial support to terrorist groups, and therefore that the protections granted by Section 230 were limited. All three cases were consolidated by the Ninth Circuit; Gonzalez and Taamneh will now be considered by the Supreme Court.

Even a narrowly tailored decision in favor of the plaintiffs will be detrimental both to free speech and to the overall quality of social media content. Any moderation scheme capable of operating at the scale of the modern internet must balance recall (catching as much of the genuinely harmful content as possible) against precision (making sure that what gets flagged really is harmful). Holding platforms liable for every single piece of terrorist content that slips through their algorithmic cracks will force them to optimize for recall, which will in turn require them either to over-moderate and chill speech or to abandon editorial discretion entirely.
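To see why recall and precision pull against each other, consider a toy example with made-up scores: a moderation classifier assigns each post a harm score, and the platform flags everything above a threshold. The stricter the threshold, the more of what it flags is truly harmful, but the more harmful content it misses; loosen the threshold to catch everything and it inevitably sweeps up benign posts too.

```python
# Illustrative only: toy harm scores and labels, chosen to show the trade-off.
def precision_recall(scores_and_labels, threshold):
    """Flag everything scoring >= threshold; measure precision and recall."""
    flagged = [(s, harmful) for s, harmful in scores_and_labels if s >= threshold]
    true_positives = sum(1 for _, harmful in flagged if harmful)
    all_harmful = sum(1 for _, harmful in scores_and_labels if harmful)
    precision = true_positives / len(flagged) if flagged else 1.0
    recall = true_positives / all_harmful if all_harmful else 1.0
    return precision, recall

# Ten posts: (model's harm score, whether the post is truly harmful).
posts = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
         (0.60, False), (0.55, False), (0.40, True), (0.30, False),
         (0.20, False), (0.10, False)]

for threshold in (0.85, 0.5, 0.15):
    p, r = precision_recall(posts, threshold)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
# Lowering the threshold catches more of the harmful posts (higher recall)
# but flags more benign posts along the way (lower precision).
```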

Such a requirement would pose an expensive, likely insurmountable obstacle to smaller platforms that don’t rake in billions of dollars per quarter and thus can’t afford to achieve perfect policy compliance—not to mention search engines, streaming platforms, online job boards, news websites, the AI models that automatically finish your sentences in emails, and so much more. These services all rely on recommendation algorithms to match users with content. The court’s siding against the platforms in Gonzalez would immediately throw most of the services we use into legal disarray.

Section 230 has fundamentally shaped how Americans understand the internet by enabling rapid innovation and engendering platforms that create opportunities for people to share and connect across the world. Not all of these innovations have been uniformly beneficial, but how exactly to improve upon them is not something the Supreme Court is designed to figure out.

Congress, however, has a variety of tools at its disposal to do just that. It can begin by requiring platforms to disclose core integrity metrics, such as the prevalence of abusive content on their platforms and the time it takes them to act on that content. Similar to how the Obama administration handled fuel efficiency standards, Congress could require platforms to meet a minimum standard within a short time period, then gradually raise expectations as technology improves and companies continue to adapt. Notably, in its ruling on Gonzalez, the Ninth Circuit has already indicated that whether algorithmic recommendations should be covered by Section 230 is a question better left to Congress than to the courts.
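As a rough illustration of what disclosing those two metrics could look like, here is a sketch that computes prevalence (the share of views that land on violating content) and the median time to action from hypothetical moderation logs. All field names and figures are invented for this example.

```python
# Hypothetical moderation logs; field names and numbers are illustrative only.
from datetime import datetime
from statistics import median

logs = [
    {"views": 1200, "violating": False, "posted": None, "actioned": None},
    {"views": 300, "violating": True,
     "posted": datetime(2023, 1, 3, 9, 0), "actioned": datetime(2023, 1, 3, 15, 0)},
    {"views": 50, "violating": True,
     "posted": datetime(2023, 1, 4, 8, 0), "actioned": datetime(2023, 1, 5, 8, 0)},
]

total_views = sum(r["views"] for r in logs)
violating_views = sum(r["views"] for r in logs if r["violating"])
prevalence = violating_views / total_views  # share of views landing on abusive content

hours_to_action = [
    (r["actioned"] - r["posted"]).total_seconds() / 3600
    for r in logs if r["violating"] and r["actioned"] is not None
]
print(f"prevalence: {prevalence:.1%}")
print(f"median time to action: {median(hours_to_action):.1f} hours")
```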

Changes to the status quo will likely result in a lot of litigation, no matter what those changes actually are. A ham-handed decision by the Supreme Court here could be disastrous for the modern internet as we know it. A Congressional scalpel, wielded in partnership with technical experts in industry and academia, would likely be far more effective at limiting the risks posed by these new technologies.

With assistance from Integrity Institute fellows/members Naomi Shiffman, Dylan Moses, Derek Slater, and Maggie Engler. 
