Ranking by Engagement
By Tom Cunningham, Integrity Institute Founding Fellow. We are proud to cross-post an excerpt of his original research. To read the full piece, please visit Tom’s website here.
Six observations on ranking by engagement:
Internet platforms rank content primarily by the predicted probability of engagement. For each user, the platform selects the items most likely to make that user click, reply, retweet, etc.
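To make the mechanism concrete, here is a minimal Python sketch of a feed ranked purely by predicted engagement. The Item class, the predicted_engagement field, and rank_by_engagement are hypothetical names for illustration, not any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    # Stand-in for a learned model's output: the predicted probability that
    # this user will click, reply, reshare, etc. on this item.
    predicted_engagement: float

def rank_by_engagement(candidates: list[Item]) -> list[Item]:
    """Order a user's candidate items by predicted engagement, highest first."""
    return sorted(candidates, key=lambda item: item.predicted_engagement, reverse=True)

# Example: three candidate items for one user's feed.
feed = rank_by_engagement([Item("a", 0.02), Item("b", 0.11), Item("c", 0.05)])
print([item.item_id for item in feed])  # ['b', 'c', 'a']
```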
Ranking by predicted engagement increases user retention. In experiments that compare engagement-ranked feeds to unranked (“chronological”) feeds, users with engagement-ranked feeds consistently show substantially higher long-run retention (DAU) and time spent. Individual teams often have targets to increase short-run engagement, but the most important leadership metric is long-run retention, and leadership would generally be willing to sacrifice significant amounts of engagement in return for small increases in retention.
Engagement is negatively related to quality. The most-engaged-with content on many platforms is often objectively low quality: it’s full of clickbait, spam, scams, misleading headlines, and misinformation. Platforms have found that they can increase retention further by combining engagement predictions with “quality” predictions – using classifiers or proxies to identify content that is engaging but delivers a poor experience to users.
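One simple way to combine the two predictions is a single ranking score that discounts engagement by the predicted probability that the item is low quality. The sketch below is a made-up linear form with an arbitrary 0.5 weight, intended only to show the shape of the combination, not any platform’s actual formula:

```python
def ranking_score(predicted_engagement: float,
                  predicted_low_quality: float,
                  quality_penalty: float = 0.5) -> float:
    """Illustrative combined score: predicted engagement, discounted when a
    classifier thinks the item is clickbait, spam, or misleading.
    The linear form and the 0.5 default weight are assumptions."""
    return predicted_engagement * (1.0 - quality_penalty * predicted_low_quality)

# A highly engaging but likely-clickbait item can rank below a moderately
# engaging item that the quality classifier trusts.
print(ranking_score(0.12, predicted_low_quality=0.9))   # ~0.066
print(ranking_score(0.08, predicted_low_quality=0.05))  # ~0.078
```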
Sensitive content is often both engaging and retentive. Nudity, bad language, hate speech, and partisan speech (“sensitive” content) are often amplified by engagement-ranked feeds, but unlike low-quality content they typically attract users rather than repel them, i.e. they increase retention.
Sensitive content is often both engaging and preferred by users. Platforms have run many experiments asking users directly about their preferences over content. The results have been mixed, and platforms have often been disappointed to find that users express fairly positive attitudes towards content that the platform considers sensitive.
Platforms don’t want sensitive content but don’t want to be seen to be removing it. Having controversial or harmful content attracts negative attention from the media, advertisers, app stores, politicians, regulators, and platform employees and investors. But platforms are also liable to get negative attention when they make substantive judgments about the sensitivity of content, especially when it has a political aspect. As a consequence, platforms target sensitive content indirectly when possible, using other proxies that correlate with sensitivity, and they prefer to justify their decisions by appealing to user retention or user preference.
In an appendix I formalize the argument. I show that all these observations can be expressed as covariances between different properties of content, e.g. between retentiveness, predicted engagement rate, and other measures of content quality. From those covariances we can derive Pareto frontiers and visualize how platforms trade off between different outcomes.
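The formalization itself is in the appendix of the full piece. As a purely illustrative stand-in, the sketch below generates hypothetical per-item properties whose covariances have the signs described above (engagement negatively related to quality, positively related to retention) and sweeps a quality weight in the ranking score to trace a frontier between the average engagement and average retention of the selected items. All distributions and coefficients here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical per-item properties; the joint distribution is invented purely
# to reproduce the signs of the covariances discussed above.
engagement = rng.normal(size=n)                           # predicted engagement
quality = -0.4 * engagement + rng.normal(size=n)          # negatively related to engagement
retention = 0.4 * engagement + 0.5 * quality + rng.normal(size=n)  # retentiveness

# The observations show up as signs of pairwise covariances:
# cov(engagement, quality) < 0, cov(engagement, retention) > 0.
print(np.cov(np.vstack([engagement, quality, retention])))

# Sweep a quality weight in the ranking score and measure the average
# engagement and retention of the top decile it selects, tracing a frontier.
for w in np.linspace(0.0, 1.0, 6):
    score = (1 - w) * engagement + w * quality
    top = score >= np.quantile(score, 0.9)
    print(f"quality weight {w:.1f}: "
          f"mean engagement {engagement[top].mean():.2f}, "
          f"mean retention {retention[top].mean():.2f}")
```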
To read the full piece, please visit Tom’s website here.