On Risk Assessment and Mitigation for Algorithmic Systems

We are very excited to announce the release of our report “On Risk Assessment and Mitigation for Algorithmic Systems,” which represents six months of work within the Integrity Institute to define what risk assessments, audits, and mitigation plans should include to cover algorithmic systems used by online platforms.

Risk assessments have become a popular policy tool in legislation and regulation of online platforms. They are called for in the Digital Services Act (DSA) in the EU, the Online Safety Act (OSA) in the UK, the Kids Online Safety Act in the US, the Age Appropriate Design Code at the state level in the US, and other policies around the world. However, what a risk assessment should actually cover is still largely undetermined. Comprehensive risk assessments present a genuine opportunity for policy to change how companies design for, prioritize, and resource the safety of their platforms.

On February 17, 2024, the DSA became applicable to all digital platforms operating within the EU. It is arguably one of the most ambitious regulatory regimes for tech companies and online platforms, and includes multiple mechanisms, requirements, and provisions that are still being clarified through various implementing acts and guidance. Some of the most significant requirements for very large online platforms (VLOPs) and very large online search engines (VLOSEs) are enumerated in Articles 34 and 35. Article 34 requires that platforms carry out comprehensive risk assessments to understand the ways their services pose risks to society across a range of specific systemic risks; Article 35 requires that platforms then implement risk mitigation measures to address the risks they’ve assessed. These articles have the potential to be a robust component in incentivizing companies to build their platforms in ways that minimize the negative impacts they can have on people, societies, and democracies.

The goals of risk assessments can be framed as:

  1. Creating accountability for the negative impacts platforms can have on people and societies, and formalizing avenues for the platforms to consider these impacts.

  2. Creating accountability and incentives for the companies to follow best practices in platform design that minimize those negative impacts.

  3. Informing the public about the negative impacts and risks stemming from the platforms, so that civil society can take any necessary measures to protect people.

These goals have a high likelihood of being met as long as the risk assessments are comprehensive. That requires that risk assessments cover the existing and potential negative impacts platforms have on society, how platform design and company governance enable and amplify those negative impacts, and what the company does to understand and mitigate them.

In the following report, we present a framework for risk assessments of the algorithmic components of platforms, providing a foundation for comprehensive platform risk assessments that can accomplish the aforementioned goals. Comprehensive risk assessments will go beyond algorithms and look at the entirety of online platforms and their features, but because platform algorithms are often-cited scapegoats for risk and can play a significant role in spreading content that harms individuals and society, we start with these systems. This report is aimed primarily at policymakers, regulators, and other external stakeholders looking to understand how risk assessments can be leveraged as an effective tool for accountability, transparency, and ultimately a reduction in harm.

Read the full report here.

Jeff Allen

Jeff Allen is the co-founder and chief research officer of the Integrity Institute. He was a data scientist at Facebook from 2016 to 2019. While at Facebook, he worked on tackling systemic issues in the public content ecosystems of Facebook and Instagram, developing strategies to ensure that the incentive structure the platforms created for publishers was in alignment with Facebook’s company mission statement.
