Middleware and the Customization of Content Moderation

By Integrity Institute member Maggie Engler

The power of social media platforms to moderate content has come under increasing scrutiny in the past several years. Platforms create content policies for a variety of reasons: to comply with regulations or appease public figures, to keep the platform brand-safe for advertisers, to maximize user engagement by fighting spam, and to otherwise prevent content judged harmful to society. The prevalence of online harassment, abuse and misinformation, combined with public pressure — especially related to elections and public health issues like COVID-19 — has led to platforms creating new policies against these types of content. Balancing the limitation of harm with the promotion of free speech is a defining challenge, especially since these principles are valued differently in different contexts.
In the United States, government restrictions on platform content standards are very likely to be considered a violation of the First Amendment. In the European Union and other countries around the world, we continue to see new content regulations that transfer responsibility for deciding where to draw the line from tech companies to governments (though people are still bound to disagree on where that line should be drawn). Regulators globally seem to object to how platforms organize, recommend, and moderate content. But what if anyone who wanted to could make those decisions for themselves?
Middleware, a new layer of software that puts content moderation in users’ hands, could be a promising step towards this ideal. The term middleware originated in a paper from the Stanford Cyber Policy Center, though the basic concept has percolated in other forms for years. Fukuyama et al. refer to “software and services that would add an editorial layer between the dominant internet platforms and internet users,” and write that they “view middleware as an opportunity to introduce competition and innovation into markets currently dominated by the principal internet platforms.” Middleware effectively layers on top of existing social media, but allows for a more customizable experience, one that is mediated not only by Twitter or Facebook, but by other companies or independent developers.
One early example was Gobo Social, an experiment born out of the MIT Center for Civic Media in 2017 that is currently offline while it transitions to a new home at the University of Massachusetts Amherst’s Initiative for Digital Public Infrastructure. Gobo allowed users to connect up to three accounts on different social media platforms to a single page, showing a combined feed of updates. This content could then be filtered or prioritized in different ways, with dials at the top that let the user turn up “seriousness” or turn down “rudeness.” Gobo’s creators wrote that their intention was to “change the conversation on social media and imagine a better version of it.” And in this case, better was defined not by the provider, but by the individual user.
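As a rough illustration of how dials like these could work, the sketch below re-ranks a combined feed using per-post classifier scores and user-chosen weights. It is not Gobo’s actual code: the attribute names, scores, and weighting scheme are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A post from any connected platform, annotated with classifier scores."""
    platform: str
    text: str
    scores: dict = field(default_factory=dict)  # e.g. {"seriousness": 0.8, "rudeness": 0.1}

def rank_feed(posts, dials):
    """Re-rank a combined feed according to user-controlled dials.

    `dials` maps an attribute to a weight: positive weights boost posts that
    score high on that attribute, negative weights demote them.
    """
    def preference(post):
        return sum(weight * post.scores.get(attr, 0.0)
                   for attr, weight in dials.items())
    return sorted(posts, key=preference, reverse=True)

# Turn "seriousness" up and "rudeness" down, as a Gobo user might.
feed = rank_feed(
    [
        Post("twitter", "Thread on the new transparency report",
             {"seriousness": 0.9, "rudeness": 0.1}),
        Post("facebook", "lol you people are idiots",
             {"seriousness": 0.2, "rudeness": 0.9}),
    ],
    dials={"seriousness": 1.0, "rudeness": -1.0},
)
for post in feed:
    print(f"{post.platform}: {post.text}")
```

The point is not the particular scoring formula but that the ranking logic lives outside the platform, where the user (or a provider the user chooses) controls it.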
Part of the problem people have with platforms’ moderation decisions is that, although platforms are private actors, they often don’t feel like it. In many parts of the world, Facebook is the internet, with all the authority that entails. But with middleware, or a related proposal that Mike Masnick calls "Protocols, not platforms," the decision-making moves away from centralized platforms and towards individual users, or third parties those users select from a wide variety of options. Content moderation becomes truly competitive.
Another major critique levied at existing internet platforms is their optimization for engagement. With middleware, however, one could envision feeds optimized for any number of qualities, implemented by a variety of organizations. As Daphne Keller, the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, writes, “Users might choose a racial-justice–oriented lens on YouTube from a Black Lives Matter–affiliated group, layer a fact-checking service from a trusted news provider on top of Google News, or browse a G-rated version of Facebook from Disney.”
The middleware approach certainly doesn’t address all of the issues of content moderation. It empowers people, giving them greater control over what they see on social media: less of what they don’t want to see, and more of what they do. But platforms would still need to contend with the fact that some people want to see, share, and build communities around things like terrorist propaganda, dangerous extremist content, or self-harm material. Policies at the platform level would therefore still likely need to exist, but middleware could sit atop those broad enforcements and help users create the experience they’re seeking.
A widespread implementation of middleware would require some basic standards for data portability and interoperability. Though this may seem far-fetched given our current landscape, the OECD released a report last year on considerations for data portability and interoperability measures with the explicit goal of promoting competition in digital platform markets. Such changes would also reduce the impact of lock-in, making it easier for users to switch between services if they find providers they prefer.
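To make the portability idea concrete, the sketch below shows a deliberately minimal, platform-neutral post record of the kind such a standard might define. The schema identifier and fields are invented for illustration; a real standard would need to cover far more, such as media attachments, edit history, and deletion signals.

```python
import json
from datetime import datetime, timezone

def export_post(platform, author, text, created_at):
    """Serialize a post into a platform-neutral record that a middleware
    provider, or a competing service, could ingest."""
    return json.dumps({
        "schema": "example.org/portable-post/v0",  # invented schema identifier
        "platform": platform,
        "author": author,
        "text": text,
        "created_at": created_at.isoformat(),
    })

record = export_post("twitter", "@integrityinstitute", "New report is out.",
                     datetime(2022, 9, 1, tzinfo=timezone.utc))
print(record)
```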
Currently, some platforms do not authorize any partners to build middleware, while others are more permissive. As an example, Twitter supports middleware that is approved through its developer toolbox. There are only eleven tools as of this writing, but they introduce a lot of new possibilities. For example, Block Party is an anti-harassment tool that lets users create custom filters, blocklists, and other protections that Twitter itself doesn’t offer. Block Party and other developers are vetted by Twitter to ensure that they meet certain privacy and security standards, but control their own monetization structures and pricing.
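The sketch below illustrates the general shape of such a tool with a simplified, hypothetical filter over incoming mentions. It is not Block Party’s actual logic, and the rule names, thresholds, and data fields are assumptions for the example rather than anything drawn from Twitter’s API.

```python
import re

def filter_mentions(mentions, blocked_authors, muted_patterns, min_follower_count=0):
    """Split incoming mentions into those shown immediately and those held
    for later review, based on user-defined rules."""
    shown, held = [], []
    for mention in mentions:
        if mention["author"] in blocked_authors:
            continue  # drop blocked accounts entirely
        too_small = mention.get("author_followers", 0) < min_follower_count
        matches_mute = any(re.search(pattern, mention["text"], re.IGNORECASE)
                           for pattern in muted_patterns)
        (held if too_small or matches_mute else shown).append(mention)
    return shown, held

shown, held = filter_mentions(
    [{"author": "troll123", "text": "you're a fraud", "author_followers": 3},
     {"author": "colleague", "text": "great panel today!", "author_followers": 800}],
    blocked_authors={"spam_bot"},
    muted_patterns=[r"\bfraud\b"],
    min_follower_count=10,
)
print(len(shown), "shown;", len(held), "held for review")
```

The value of the "held for review" bucket is that harassment never has to reach the user directly; a trusted friend or the user themselves can triage it later, on their own terms.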
At this stage, it’s hard to tell how big the market for middleware might be, and there remain uncertainties about how it should work. Keller identifies four big unresolved issues: whether it is feasible for competitors to process vast amounts of platform data at a latency that doesn’t render middleware unusable; how middleware providers could make money, whether through subscriptions or potential revenue-sharing models, since a competitive ecosystem is unlikely to evolve without financial incentive; the costs of curation, and especially whether middleware would be able to leverage content moderation tools and resources from larger platforms; and privacy, or how much data platforms should share to make middleware effective while still protecting their users.
In the existing Twitter tools, data processing has proved possible with the limited amount of data shared under the user agreements. Perhaps the biggest questions are whether other platforms will follow suit in their approach, and whether the average user, long accustomed to receiving most services for free through ad-supported business models, will be interested in paying the few dollars per month that most of the services charge. However, in a conversation where it often seems impossible to please everyone and where everyone may balance freedom of expression against “lawful but awful” speech differently, handing control back to users seems like a promising avenue. To borrow the slogan of Gobo Social, “Your social media. Your rules.”