The Art of the Block

In discussing content moderation in social media, I’d like to think more about the agency of the user. Commentators have mostly focused–correctly in my opinion–on the apparent power of tech platforms to allow or remove, and to amplify or reduce, the media their users produce. But! Content moderation is a normative game, and there is no victory in a normative game - only differently-shaped balancing acts.

So far, balance has been struck (if it has been struck) by the platforms themselves. We know by now how this all works: leaders of social tech write rules for what’s allowed, and then hire manual reviewers and build automated classifiers to identify and remove bad stuff according to those rules. These approaches to the challenge of #como are well enough understood by now that they’re speed-runnable, and even programmable as a game! Moreover, many of the best practices for how a platform manages and explains its #como decisions are now law, codified in the EU’s Digital Services Act.

What, then, should we expect of ourselves - the users of social media? More user controls, with the Block chief among them, are the next frontier in content moderation at scale, but only if we use them.

New Norms in 2023

The simplest “user agency” over content moderation is the most powerful: voting with our feet. Why not just leave the platforms that offend us either with their censoriousness or with their negligence? In a field that has been characterized by monopolistic “walled gardens,” this has seemed impossible.

That changed last year. Successive waves of migration from Xitter to Mastodon, Bluesky, and Threads gave us something of a natural experiment in a “market for rules.” And in each place, new cultures are evolving. A certain norm that seems to be prevalent on Threads, for example, is the adage, “block early, block often.”

We have stigmatized this sort of “cut ‘em off” action implicitly, through our relentless championing of the mores of free speech, and explicitly, through our cries of “shanda!” when we learn someone has blocked someone else. Nevertheless, the tide seems to be turning: for teens, the block is increasingly the preferred response over submitting a report.

We should normalize Blocking, and use it as so much more than a last resort.

Sorryrealquick- What is blocking?

Let’s get technical. A “block” is an action a user can take on any given social media app to prevent another user from contacting them. The details of what a “block action” entails can vary quite a bit by platform - but essentially they all provide users with some ability to hang up on someone else.

But a Block is more elegant than that, especially in the context of recommendation systems (TikTok etc.) and microblogs (Xitter etc.). Its power has a lot to do with two concepts underlying social tech: The Graph and Inventory.

The Graph

Social tech relies on a key foundational structure to work at all - “the social graph,” or “network.” The more people on a service, and the more connections between the people, the bigger and better the social graph is on that service. More than its buttons and colors, the single biggest factor in an app’s success is simply “Are there other people here? Are they cool?”

In network theory, each person in a group is called a “node,” and any connection between two people is called an “edge.” We describe a graph as more open if it has lots of edges, and more closed if it has fewer edges.

A block is a severing of an edge in a social graph. It incrementally makes a user’s social graph more closed.
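
To make that concrete, here is a minimal sketch in Python - an illustration only, not any platform’s actual data model - where the graph is stored as adjacency sets and a block simply deletes the edge in both directions:

# A minimal sketch of a social graph as adjacency sets - an illustration,
# not any platform's real data model. A block deletes the edge in both directions.
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of connected nodes

    def connect(self, a: str, b: str) -> None:
        """Add an edge between two nodes (a follow, a friendship, a DM thread)."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def block(self, blocker: str, blocked: str) -> None:
        """Sever the edge: the blocker's graph becomes incrementally more closed."""
        self.edges[blocker].discard(blocked)
        self.edges[blocked].discard(blocker)

graph = SocialGraph()
graph.connect("abe", "lusa")
graph.block("abe", "lusa")
assert "lusa" not in graph.edges["abe"]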

Inventory

The set of all the words and pictures and videos that might reach you in a social network is called your “inventory.” If you’re on a platform that doesn’t recommend stuff, then your inventory is simply the content you and your friends post or send to each other. In places like TikTok, Instagram, Threads, and Twitter, though, “inventory” is the set of all content produced by the entire social graph on the app. A feed ranks your inventory based on how likely you are to spend time with each item.

So if blocking removes a node from your social graph, it also reduces the size of your inventory: if I block Lusa, I won’t see any content Lusa shares.
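
As a rough sketch of how that plays out in a recommendation-driven feed - again, an illustration with made-up field names and scores, not any platform’s real ranking code - inventory is the pool of candidate posts, blocked authors are filtered out before ranking, and what remains is sorted by a predicted-engagement score:

# A rough sketch of inventory and ranking in a recommendation-driven app.
# The field names and scoring are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_seconds: float  # a model's guess at how long you'll dwell on it

def build_feed(inventory: list[Post], blocked: set[str], limit: int = 10) -> list[Post]:
    """Drop posts from blocked authors, then rank the rest by predicted time spent."""
    candidates = [p for p in inventory if p.author not in blocked]
    return sorted(candidates, key=lambda p: p.predicted_seconds, reverse=True)[:limit]

# Blocking Lusa removes her posts from my inventory before ranking even starts.
inventory = [Post("lusa", "hot take", 42.0), Post("abe", "vacation photo", 17.5)]
print(build_feed(inventory, blocked={"lusa"}))  # only abe's post remains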

Blocking Is Good

On Xitter, the Block has often been a last resort you might take when the vilest bullies target you with their sexist, racist vitriol. We alternately celebrate it for its simple effectiveness and bemoan that it doesn’t go far enough in the battle against trolls. Let’s take three reasons blocking may be seen as bad, and flip the script on them.

1. Blocking is extreme! I have cut someone off completely - isn’t that excessive?

In general, social tech must build a big, open social graph fast in order to be successful. When a user joins, they are more-or-less immediately faced with the entire universe of users and content on that platform. That’s a big (lotta nodes), open (lotta edges) social graph, billowing up an even bigger inventory. Good app design mediates the extent to which that experience is like drinking from a firehose, but the default position is still “all-first.”

I believe that it is the “all-first” world that is extreme; that humans haven’t developed socially - to say nothing of cognitively - to succeed in a big, open social network. And so Blocking is not extreme, when considered beside that maelstrom: it is instead an appropriate, practical tool for a person confronted with an extremely big, loud social environment. Indeed, it might even be a public health intervention.

2. Blocking is anti-free-speech! I value freedom of expression like it’s a religious virtue - so isn’t blocking a sin?

Free speech is great, and it’s also a completely different concept from mandatory listening. Our inherited wisdom of radical, unfettered expression driving a “marketplace of ideas” needs updating for a digital world - one characterized by an unfathomably big social graph with high-speed inventory.

A helpful neologism here is “free speech isn’t free reach,” and there’s some wisdom to this snarky rhyme. It allows for a sort of Speaker’s Corner to endure, a place where ideas brilliant and stupid, uplifting and harmful may spout forth freely, while also preserving a key mechanism of that age: the air carries a shouted voice only so far. Social media removed that natural constraint, and the Block is among the tools that help restore it.

3. Blocking is futile! There are so many trolls, meanies, dummies, and bad faith blowhards - surely blocking each one is hopeless! Crappy noise and harassment are just part of social media.

An added benefit of blocking is that it provides a “signal” back to the product managers. For instance, if 60% of the people who have seen my content block me, my content had better be down-ranked in other people’s feeds, hadn’t it? Savvy engineers will factor these kinds of signals into recommendation systems, which means your block is actually a pro-social good deed! You have not only protected yourself from someone you find to be unpleasant, but you have helped the ecosystem marginalize that unpleasant person even more.
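
For illustration only, here is one way that signal could be folded into ranking; the 60% figure and the penalty curve below are invented for this sketch, not drawn from any real recommendation system:

# A toy example of treating block rate as a negative ranking signal.
# The threshold and penalty curve are invented for this sketch.
def author_quality_multiplier(blocks: int, views: int) -> float:
    """Down-weight authors whom a large share of viewers have blocked."""
    if views == 0:
        return 1.0
    block_rate = blocks / views
    return max(0.0, 1.0 - block_rate)

# An author blocked by 60% of viewers sees their content's score cut by 60%.
base_score = 42.0
print(base_score * author_quality_multiplier(blocks=60, views=100))  # 16.8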

I would guess that today a large majority of users across social media platforms have blocked only a handful of other users (perhaps some random spammy or porn accounts, for instance). That means the aggregate social graph is almost entirely intact. What would social media feel like for users if blocking were as common as more positive signals, such as posting, clicking “like,” or following? What if your social graph, and your friends’ social graphs, were much smaller - hundreds of thousands of people instead of hundreds of millions? We shouldn’t think of a user controlling their own experience with social tech as a futile vision.

Platforms: Blocking is Good for You, Too

It might seem to the platform owner that users hacking away at the social graph is existentially bad. So much of the business value is in this audience, which we monetize (mostly) through ad recommendations! If users are closing doors left and right, won’t our user growth and engagement stall, and our core adtech dynamics break?

It’s true that quantity of engagement–the number of edges in the graph–is a lot easier to model than quality of engagement. But think instead back to content moderation. On the one hand, a platform can’t be so strict with speech rules that people leave, nor invest so much in the considerable operations and technology of moderating content at scale that it eats into margins (the board purses its lips at this). On the other hand, a platform can’t be so loose with speech rules that people leave, or so lax that constant crises pile up for a beleaguered VP of Communications, and ultimately the CEO. So between Scylla and Charybdis: user controls.

Giving users controls to moderate their own content, and encouraging them to do so, is the just-right middle way because:

  1. It’s scalable. The ratio of content moderators to users in social media today is vanishingly small, and that ratio creates simple bottlenecks when managing user complaints. Consider instead that every user is their own content moderator - and equip them accordingly. Rather than demanding that the platform’s Trust and Safety apparatus remove content and other users, the user can do so themself.

  2. It’s devolved. I get it, CEO: you’re a software engineer. Or inventor. Or business wiz. You are profoundly uninterested in the project of governing speech. Begrudgingly have you found yourself in the Trust and Safety game at all, and why can’t people just be cool? The Block is here to help! This is a philosophically libertarian tack, after all: people will consume and reject what they want. It’s not - and shouldn’t be - up to platform owners alone to decide.

  3. It’s a scalpel, not a sledgehammer. If “severing an edge” sounds too brutal, pair it with a suite of additional, related actions: temporary mutings, “restrict” actions, “don’t recommend,” and so forth, to afford users plenty of more nuanced opportunities to manage their experiences. A decent investment in user experience research (UXR) will tell you a lot about how your users want to treat their networks.

Beyond Blocking

On big open social networks, I really do believe in blocking as frequently as following, and think that’s a radical cultural change that needs more traction. Why not prune ourselves out of extremely large, extremely loud environments, where our very safety is at stake?

But this isn’t really a piece about blocking: it’s about user controls generally. And it’s not really about user controls: it’s about devolving power (and risk) away from platform owners. And it’s not really about devolving power: it’s about how our experiences in life are always some mix of individual agency on the one hand and structural parameters on the other.

Platforms need to protect their users adequately. That certainly includes a well-funded Trust and Safety apparatus, but it also includes equipping their users with pruning tools and volume controls for that “last mile” management of inventory. These capabilities need to reflect the virulent, adversarial space that characterizes huge chunks of the graph: for example, a block action should also block all other accounts that person controls (“alts”); a block action should not give any reasoning to the blocked account; and so forth.

As the Digital Markets Act ushers in more open protocols and interoperability, and as the Fediverse takes off, and as more middleware (such as Block Party) enters the fray, I hope we’ll create more transparency and dialogue around the social graphs that underlie social tech. Where are the edges, and how often are people controlling their own? Which app designs encourage an individual’s management of their own network, and which ones disempower them? What is “safe enough” to expect of social tech? And what does it take to change an online cultural norm? For a start, next time your blood pressure rises at something you saw online, don’t wait: block.


Author Biography

Abe Katz is an educator interested in how public and private forces shape big social problems. Since 2018, he’s explored these questions in social tech. He worked on feed ranking problems, responses to misinformation, and high-profile content moderation decisions with the Oversight Board while at Meta. Today, he is the product policy lead at Discord. LinkedIn; @katz.abe@threads.net; @abekatz@indieweb.social
