Summary: Integrity Institute’s Amicus Brief on Gonzalez v. Google
Dec 9, 2022
In support of neither party, the amicus brief explains major uses of algorithms in large tech platforms and urges the Court to decide narrowly based on technical specificity and nuance.
Overview
The Integrity Institute is a think tank powered by a community of integrity workers, many of whom have helped build recommender systems (colloquially “algorithms”) used in large tech platforms. Our amicus brief, in support of neither party, offers this community’s independent, professional expertise to the Court.
The case of Gonzalez v. Google concerns whether recommender systems are covered by the liability protections that Section 230 of the Communications Decency Act (enacted as part of the Telecommunications Act of 1996) provides to Internet service providers. Drawing on the Integrity Institute's member expertise, our amicus brief explains to the Court how tech platforms actually use algorithms.
Algorithms are core components of how tech platforms function, and platforms use different types of algorithms in different contexts (such as content recommendation, content moderation, or advertising). Well-designed algorithms can in fact enable a functional and enjoyable experience on tech platforms.
Given how much algorithms differ, this amicus brief urges the Court to focus on the specific facts and algorithms at issue and to decide the case on narrow, fact-specific grounds.
Three types of algorithms explained
This amicus brief explains three types of algorithms commonly used in large tech platforms, as well as their respective harms and benefits, to illustrate to the Court that technical specificity and nuance are of utmost importance in cases involving algorithms and the question of liability.
Algorithms for content recommendations
Platforms may use algorithms to recommend content, personalizing recommendations for individual users based on their past behaviors (as well as inferred characteristics) and optimizing expected value to the company by maximizing each user's expected engagement with recommended content.
When platforms use algorithms to maximize engagement, they cannot fully prevent harmful third-party content from being recommended to users who have consumed similar content in the past.
Algorithms used for content recommendations can benefit user experience in scenarios such as recommending better matches on dating apps or autofilling URLs on web browsers.
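As a purely illustrative sketch of the engagement-maximizing ranking described above (hypothetical names and data, not any platform's actual system), the core idea is simply to score candidate items by predicted engagement for a given user and surface the highest-scoring ones:

```python
# Illustrative sketch only: rank candidate items for a user by predicted
# engagement, using past behavior as a simple interest profile.
# All identifiers and data here are hypothetical.

def predict_engagement(user_interests, item_topics):
    """Score an item by overlap between the user's inferred interests and
    the item's topics; a stand-in for a learned engagement model."""
    return sum(user_interests.get(topic, 0.0) for topic in item_topics)

def recommend(user_interests, candidates, k=3):
    """Return the top-k candidate item ids by predicted engagement."""
    ranked = sorted(
        candidates,
        key=lambda item: predict_engagement(user_interests, item["topics"]),
        reverse=True,
    )
    return [item["id"] for item in ranked[:k]]

# A user who engaged heavily with one topic keeps getting more of it,
# including borderline items, because the objective is engagement alone.
user = {"cooking": 0.2, "conspiracy": 0.9}
items = [
    {"id": "recipe-video", "topics": ["cooking"]},
    {"id": "fringe-theory-clip", "topics": ["conspiracy"]},
    {"id": "news-report", "topics": ["news"]},
]
print(recommend(user, items, k=2))
# → ['fringe-theory-clip', 'recipe-video']
```

Note how the toy objective never asks whether an item is harmful, only whether the user is likely to engage with it, which is the tension the brief describes.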
Algorithms for content moderation and safety
Platforms use algorithms to prevent and reduce harm by semi-automating the flagging, removal, and re-ranking of third-party content likely to violate platform policies or laws. Performed at scale, this process cannot be perfect, and the algorithms are continuously tuned to balance precision and recall.
If a platform prioritizes recall over precision in its content moderation algorithms, its process has a high false positive rate, wrongly removing legitimate content. Most large platforms therefore choose to prioritize precision over recall, which allows most users to post content but can sometimes lead to extensive harm when false negatives (violating content that escapes detection) are shared widely.
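The tradeoff described above, commonly framed in machine learning as precision versus recall, can be illustrated with a toy flagging threshold. This is a hypothetical sketch with invented scores, not any platform's actual moderation system:

```python
# Illustrative sketch only: how a moderation flagging threshold trades
# precision against recall. Scores and labels are hypothetical.

def moderation_outcomes(scored_posts, threshold):
    """Compute (precision, recall) for a flagging threshold, where each
    post is a (violation_score, actually_violating) pair."""
    tp = fp = fn = tn = 0
    for score, violating in scored_posts:
        flagged = score >= threshold
        if flagged and violating:
            tp += 1
        elif flagged and not violating:
            fp += 1          # legitimate post wrongly removed
        elif not flagged and violating:
            fn += 1          # violating post slips through
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

posts = [(0.9, True), (0.85, True), (0.7, False), (0.6, True),
         (0.4, False), (0.35, True), (0.2, False), (0.1, False)]

# Low threshold: flag aggressively. Higher recall, lower precision
# (more legitimate posts removed).
print(moderation_outcomes(posts, 0.3))   # precision ≈ 0.67, recall = 1.0

# High threshold: flag conservatively. Higher precision, lower recall
# (violating posts are shared unflagged).
print(moderation_outcomes(posts, 0.8))   # precision = 1.0, recall = 0.5
```

No threshold in this toy example achieves both perfect precision and perfect recall, which mirrors the brief's point that moderation at scale involves an unavoidable balancing act.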
Algorithms for advertising and commerce
Platforms use algorithms to serve targeted ads to individuals through “retargeting,” which relies on expressed and inferred information about those individuals that the platforms have already compiled.
Algorithms used in techniques like “retargeting” primarily benefit companies, which encourages those companies to collect ever more data about users.
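As a purely hypothetical sketch of the retargeting idea described above (all attribute names, ads, and the matching rule are invented for illustration), an ad is served when its targeting criteria overlap the expressed and inferred attributes a platform has compiled about a user:

```python
# Illustrative sketch only: match ads against a compiled user profile.
# Every name and data value here is hypothetical.

user_profile = {
    "expressed": {"followed_pages": {"running_club"}},
    "inferred": {
        "interests": {"marathons", "fitness_gear"},
        "recently_viewed_products": {"trail_shoes"},
    },
}

ad_inventory = [
    {"ad": "trail-shoes-discount", "target": {"trail_shoes"}},
    {"ad": "office-chairs", "target": {"home_office"}},
]

def retarget(profile, ads):
    """Pick ads whose targeting set overlaps the user's compiled
    attributes (expressed and inferred combined)."""
    known = (profile["expressed"]["followed_pages"]
             | profile["inferred"]["interests"]
             | profile["inferred"]["recently_viewed_products"])
    return [a["ad"] for a in ads if a["target"] & known]

print(retarget(user_profile, ad_inventory))  # → ['trail-shoes-discount']
```

The sketch shows why more compiled data makes targeting more effective for the company: every additional attribute enlarges the set an ad can match against.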
Quote
Integrity Institute Executive Director and co-founder Sahar Massachi said, “For years, integrity workers inside platforms have seen people on the outside misunderstand how these systems actually work. This isn’t their fault — the knowledge was locked up and siloed. Now, we are speaking with our own voice. Dozens of members — all professionals in the field – directly contributed ideas for, or vetted the text of, this brief.”
“We are decidedly neutral on the merits of who should win. That is not our role, and our members might disagree. Instead, we are united on how these platforms actually work. We stand ready to be trusted honest brokers, to anyone in society who wants to learn how the social internet can (and does not) help people, societies, and democracies thrive.”
About us
The Integrity Institute is a think tank powered by a community of integrity professionals: tech workers with experience in integrity roles — roles dedicated to fixing harms to people and society within social internet platforms. The Institute cultivates a thriving community of more than 100 integrity professionals with experience on trust and safety, product, integrity, and quality teams across 26+ different platforms, including Facebook, YouTube, Google, TikTok, Twitter, Instagram, Snapchat, WhatsApp, Quora, and Clubhouse. Institute members have observed, and often helped build, the architecture of the social internet, and this amicus brief offers their professional and technical expertise to the Court.
Full text of the amicus brief is available here. We thank All Rise Trial & Appellate for authoring this brief and Reset.tech for financially supporting this work.
###
Contact: Sahar Massachi, hello@integrityinstitute.org