Integrity Institute Input on UN Code of Conduct on Information Integrity

In December 2023, the Integrity Institute submitted comments in response to the United Nations Office of Global Communications’ online consultation on the draft Code of Conduct for information integrity on digital platforms. The Secretary-General outlined the principles and recommendations in a policy brief on information integrity on digital platforms.

Overall, the Secretary-General’s policy brief and the principles therein outline useful guidance for the multiple stakeholders with an interest in and responsibility for promoting information integrity. The Institute’s comments focused on how to make the principles more specific and comprehensive, particularly the recommendations for online platforms. They address areas such as the specific transparency requirements that would be useful to set for platforms, how platforms may implement a human rights framework, and how platforms can invest to support independent media, empower users, strengthen the important role of researchers, and scale up their own responses to information integrity challenges on their services. Specific and comprehensive transparency guidelines will be key for platforms to demonstrably implement these principles.

The full text of our submission is below.

Principle 1 : Commitment to information integrity 

UN text:

All stakeholders should refrain from using, supporting or amplifying disinformation and hate speech for any purpose, including to pursue political, military or other strategic goals, incite violence, undermine democratic processes or target civilian populations, vulnerable groups, communities or individuals;

Integrity Institute comments:

Recommendations for platforms implementing this principle should include:

Platforms should develop content policies prohibiting disinformation and hate speech on their services, in ways that address these harms and align with the company’s values.

To substantiate their commitments to information integrity, platforms should investigate whether the design of platform features and algorithmic systems is amplifying disinformation and hate speech, and be transparent about the causes of exposure to disinformation and hate speech on the platform. This should include transparency reports that show how many exposures come from content posted by, or forwarded/shared from, accounts the user doesn’t follow; how many exposures come from accounts users followed because of a platform recommendation; and how disinformation and hate speech content scores in the platform’s algorithmic ranking systems.
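As a rough illustration of the kind of reporting this implies, the sketch below computes the share of exposures to violating content broken down by how the viewer reached it. The impression-log fields and source labels are hypothetical, not any platform’s actual data model.

```python
# Hypothetical sketch: an exposure-source breakdown for a transparency report.
# The log schema ("source", "policy_label") is illustrative only.
from collections import Counter

def exposure_source_breakdown(impressions):
    """Share of exposures to violating content, split by how the viewer reached it."""
    sources = Counter(
        imp["source"]  # e.g. "followed", "reshared_from_unfollowed", "recommended"
        for imp in impressions
        if imp["policy_label"] in {"disinformation", "hate_speech"}
    )
    total = sum(sources.values()) or 1
    return {source: count / total for source, count in sources.items()}

if __name__ == "__main__":
    sample = [
        {"source": "recommended", "policy_label": "disinformation"},
        {"source": "followed", "policy_label": "none"},
        {"source": "reshared_from_unfollowed", "policy_label": "hate_speech"},
    ]
    print(exposure_source_breakdown(sample))  # {'recommended': 0.5, 'reshared_from_unfollowed': 0.5}
```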

Principle 2 : Respect for human rights

UN text:

Member States should: 

(i) Ensure that responses to mis- and disinformation and hate speech are consistent with international law, including international human rights law, and are not misused to block any legitimate expression of views or opinion, including through blanket Internet shutdowns or bans on platforms or media outlets;  

(ii) Undertake regulatory measures to protect the fundamental rights of users of digital platforms, including enforcement mechanisms, with full transparency as to the requirements placed on technology companies;

All stakeholders should comply with the Guiding Principles on Business and Human Rights;

Integrity Institute comments:

For platforms, a strong stance on implementation could entail:

Platforms should create and communicate a framework for human rights, including how their platforms can impact human rights. This could include a framework for decision-making around how to balance respecting human rights against demands and threats from authoritarian governments, and how human rights principles are incorporated into platform design, for example by providing users with recourse in line with restorative justice principles (giving users and potential violators an opportunity to appeal, learn, improve, and course-correct if they have demonstrated bad behavior) and by preventing recidivism.

Their framework should also lay out how human rights issues are prioritized and resourced. This could, for example, inform how human rights issues are balanced when resources are limited, and can help ensure that necessary resources are allocated to protect regions of the world where human rights concerns might be high but business interests low.

Principle 3 : Support for independent media 

UN text:

Member States should guarantee a free, viable, independent and plural media landscape with strong protections for journalists and independent media, and support the establishment, funding and training of independent fact-checking organizations in local languages; 

News media should ensure accurate and ethical independent reporting supported by quality training and adequate working conditions in line with international labor and human rights norms and standards;

Integrity Institute comments:

Platforms can do a lot here, so we recommend adding a section aimed at them, for example:

Platforms should support independent media:

This can begin with working with local experts in media ecosystems and providing standard protections to independent media in all situations, especially risky ones, e.g., through account protections or prioritized placement on lists that ensure their content isn’t removed due to harassment and false-reporting campaigns. Programs such as Meta’s Trusted Partner Program should (1) be expanded to include all relevant organizations and (2) have their requirements around contributions to content policy reviewed, given the impartiality of some trusted partner non-profit organizations.

In addition, platforms, which often control traffic and attention (and thus, often, revenue) to publishers, should ensure that this distribution of publishing revenue is done responsibly. This can mean ensuring that accurate, original, and local reporting performs well under their ranking algorithms, and that inaccurate, unoriginal, and stolen content performs poorly. 

Platforms also often have monetization programs, where they directly share revenue with publishers and content creators. Standards should be set in these programs to ensure that independent media can have access to monetization, and that low quality publishers that post inaccurate content or content stolen or repurposed from independent media are kept out.

Principle 4 : Increased transparency

UN text:

Digital platforms should: 

  1. Ensure meaningful transparency regarding algorithms, data, content moderation and advertising; 

  2. Publish and publicize accessible policies on mis- and disinformation and hate speech, and report on the prevalence of coordinated disinformation on their services and the efficacy of policies to counter such operations; 

News media should ensure meaningful transparency of funding sources and advertising policies, and clearly distinguish editorial content from paid advertising, including when publishing to digital platforms; 

Integrity Institute comments:

Recommendations for platforms to implement comprehensive transparency would be impactful across the principles. 

Platforms should release information on the scale, nature, and causes of the spread of harmful content on their platforms. Detailed recommendations are available at:

https://integrityinstitute.org/s/Metrics-and-Transparency-Summary-EXTERNAL.pdf

https://integrityinstitute.org/s/Ranking-and-Design-Transparency-EXTERNAL.pdf

Scale: How many impressions are there on policy violating content, and how many people are exposed to known violating content monthly? What is the percentage of all impressions that are on violating content?

Cause: Why are users seeing violating content? Are platform design and algorithms playing a role? Include information about algorithms (important features for ranking, user data used, topline metrics to evaluate systems, how risky content performs), and the fraction of impressions on violating content coming from recommended content or accounts vs accounts a user chose to follow. 

Nature: Who is seeing harmful content? What is the frequency distribution of impressions on violating content, e.g., how many users are exposed to 1, 2, or 3 pieces of violating content per month? Are the exposures evenly distributed among users, or is there a subset who are seeing more? What factors might explain the difference?
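To make the scale and nature questions concrete, the sketch below computes the two underlying metrics, prevalence and the per-user exposure distribution, assuming a hypothetical impression log with user_id and is_violating fields.

```python
# Illustrative sketch of the metrics above; the impression record schema is assumed.
from collections import Counter

def prevalence(impressions):
    """Fraction of all impressions that land on violating content."""
    if not impressions:
        return 0.0
    violating = sum(1 for imp in impressions if imp["is_violating"])
    return violating / len(impressions)

def exposure_distribution(impressions):
    """Map k -> number of users exposed to exactly k violating items in the period."""
    per_user = Counter(imp["user_id"] for imp in impressions if imp["is_violating"])
    return Counter(per_user.values())

if __name__ == "__main__":
    sample = [
        {"user_id": "a", "is_violating": True},
        {"user_id": "a", "is_violating": True},
        {"user_id": "b", "is_violating": False},
        {"user_id": "c", "is_violating": True},
    ]
    print(prevalence(sample))             # 0.75
    print(exposure_distribution(sample))  # Counter({2: 1, 1: 1})
```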

Platforms should also provide datasets. For external accountability, platforms should release datasets for specific time periods containing at least the top 10,000 pieces of public content (based on impressions or views) on the platform per week, and a random sample of at least 10,000 impressions on public content.
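A minimal sketch of assembling such a release, assuming a simple list-of-records representation of public content and impression logs; the field names and default sizes follow the recommendation above but are otherwise hypothetical.

```python
# Minimal sketch: weekly transparency datasets of top public content and a
# random sample of impressions. The "impressions" field name is an assumption.
import heapq
import random

def weekly_transparency_datasets(public_content, public_impressions,
                                 top_n=10_000, sample_n=10_000):
    """Return the top-N public content by impressions and a uniform random sample of impressions."""
    top_content = heapq.nlargest(top_n, public_content,
                                 key=lambda item: item["impressions"])
    sample_size = min(sample_n, len(public_impressions))
    impression_sample = random.sample(public_impressions, sample_size)
    return top_content, impression_sample
```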

Principle 5 : User empowerment

UN text:

Member States should ensure public access to accurate, transparent, and credibly sourced government information, particularly information that serves the public interest, including all aspects of the Sustainable Development Goals;

Digital platforms should ensure transparent user empowerment and protection, giving people greater choice over the content that they see and how their data is used. They should enable users to prove identity and authenticity free of monetary or privacy tradeoffs and establish transparent user complaint and reporting processes supported by independent, well publicized and accessible complaint review mechanisms;

All stakeholders should invest in robust digital literacy drives to empower users of all ages to better understand how digital platforms work, how their personal data might be used, and to identify and respond to mis- and disinformation and hate speech. Particular attention should be given to ensuring that young people, adolescents and children are fully aware of their rights in online spaces;

Integrity Institute comments:

Platform transparency is crucial to user empowerment. Platforms should provide users with information on the signals and features used to rank and recommend organic and advertising/sponsored content, including general descriptions of their recommendation algorithms (what the most important features are, which features use data collected about the user or their past actions on the platform). Users should also have the explicit option to turn off personalized recommendations. All user control options should be clear and easily accessible. 

However, platforms need to balance personalization with quality control:  

On one hand, personalization features such as blocking and muting are essential for individual safety. These tools empower users to tailor their online experience and protect themselves from harmful or unwanted content.

On the other hand, companies have a responsibility to ensure their algorithms are unbiased and do not inadvertently lead users down harmful paths; for example, personalization tools could entrench users in conspiracy content. Robust quality control measures are therefore needed that go beyond mere personalization and ensure that, while users have control over their experience, they are also presented with content that is fact-checked, reliable, and diverse, preventing echo chambers and the spread of misinformation.

Principle 6 : Strengthened research and data access

UN text:

Member States should invest in and support independent research on the prevalence and impact of mis- and disinformation and hate speech across countries and languages, particularly in underserved contexts and in languages other than English, allowing civil society and academia to operate freely and safely;

Digital platforms should:

(i) Allow researchers and academics access to data, while respecting user privacy. Researchers should be enabled to gather examples and qualitative data on individuals and groups targeted by mis- and disinformation and hate speech to better understand the scope and nature of harms, while respecting data protection and human rights; 

(ii) Ensure the full participation of civil society in efforts to address mis- and disinformation and hate speech;

Integrity Institute comments:

Platforms should acknowledge the important role that external researchers, civil society, and independent media play in mitigating information integrity harms. External stakeholders fill gaps in specialized or local knowledge, provide additional identification of harms on platforms, and help address risks arising from the power imbalance inherent in an ecosystem where a handful of companies determine the precise boundaries of what content people see. Deliberative approaches to platform governance, including citizen panels, assemblies, or independent oversight bodies, or crowdsourcing mechanisms to make or inform content moderation decisions or policies, can help mitigate these risks.

Platforms should provide support for all the goals data access can serve. For example, they should distinguish between academic researchers and civil society researchers: both require data access, but they have different goals and needs. Academics might need more sensitive internal data to study the platform’s response to harmful content and impacts on users. Civil society needs real-time access to public content and the ability to archive data for long-term reference.

Platforms should ensure that interactions with researchers and civil society are mutually beneficial and that access to the data is easily accomplished (affordable, accessible, with clearly defined parameters on how to obtain access).

Principle 7 : Scaled-up responses

UN text:

All stakeholders should: 

(i) Allocate resources to address and report on the origins, spread and impact of mis- and disinformation and hate speech, while respecting human rights norms and standards and further invest in fact-checking capabilities across countries and contexts; 

(ii) Form broad coalitions on information integrity, bringing together different expertise and approaches to help to bridge the gap between local organizations and technology companies operating at a global scale; 

(iii) Promote training and capacity-building to develop understanding of how mis- and disinformation and hate speech manifest and to strengthen prevention and mitigation strategies; 

Integrity Institute comments:

Specific principles for platforms could be: 

Platforms should devote sufficient resources to tackle disinformation and hate speech in all regions where their platform is used to share content, using human rights prioritization frameworks to allocate limited resources. This includes ensuring that at-risk regions are properly protected, even when there isn’t a strong business case to do so.

Platforms should engage meaningfully with external researchers, civil society, and independent media, in line with their role in globally mitigating harms from disinformation and hate speech (as described under Principle 6). Platforms should ensure that interactions with external stakeholders are mutually beneficial and that stakeholders are empowered rather than used by the platforms without support. This includes ensuring such engagement provides timely benefit. For example, reports on influence operations should be delivered as soon as possible to allow civil society to effectively respond, rather than months after the fact. 

Platforms should sign onto the Code of Practice on Disinformation and provide necessary data and access for meaningful assessment of the spread of mis/disinformation (in line with comprehensive transparency guidance). The Code also provides guidance on how platforms can interact with and support civil society.

Principle 8 : Stronger disincentives

UN text:

Digital platforms should move away from business models that prioritize engagement above human rights, privacy and safety; 

Advertisers and digital platforms should ensure that advertisements are not placed next to online mis- or disinformation or hate speech, and that advertising containing disinformation is not promoted;

News media should ensure that all paid advertising and advertorial content is clearly marked as such and is free of mis- and disinformation and hate speech;

Integrity Institute comments:

Implementation requires transparency: platforms should report the topline metrics used to evaluate their ranking and recommendation systems, along with their definitions. Metrics shouldn’t be limited to engagement (or proxies for engagement), but should also include integrity metrics, such as the number of exposures to, and the prevalence of, disinformation and hate speech.

Because engagement-based ranking is linked to the amplification of harmful content, platforms should use other inputs in their algorithms, such as an objective content-quality assessment that aligns with company values and mission.
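One way to read this recommendation is as a change to the ranking value model: blend predicted engagement with an independent quality assessment rather than optimizing for engagement alone. The sketch below is illustrative only; the feature names and weights are assumptions, not any platform’s actual system.

```python
# Illustrative only: a ranking value model that is not driven solely by
# predicted engagement. Feature names and weights are hypothetical.
def ranking_score(item, engagement_weight=0.5, quality_weight=0.5):
    """Combine predicted engagement with an independent content-quality assessment."""
    score = (engagement_weight * item["predicted_engagement"]
             + quality_weight * item["quality_score"])
    if item.get("is_violating"):
        # Never boost content already labeled as violating.
        score = min(score, 0.0)
    return score

candidates = [
    {"id": 1, "predicted_engagement": 0.9, "quality_score": 0.2, "is_violating": False},
    {"id": 2, "predicted_engagement": 0.6, "quality_score": 0.8, "is_violating": False},
]
ranked = sorted(candidates, key=ranking_score, reverse=True)  # item 2 ranks first
```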

Platforms should create positive incentives for content creators and publishers everywhere. Creators in their monetization programs should pass a high quality bar, including checks on accuracy and adjacency to hateful content. Platforms should also ensure that their systems do not reward the sharing of misinformation and hate speech with additional distribution and engagement, and should support the work of external stakeholders to better understand the incentives the platform is creating globally.

Governments can create incentives for platforms and ensure accountability. Government-required risk assessments and audits of platforms can help establish the level of risk and the extent to which platform design choices are exacerbating or mitigating it, and can verify claims made by platforms regarding mitigation measures, user empowerment, and best practices.

Principle 9 : Enhanced trust and safety

UN text:

Digital platforms should:

(i) Ensure safety and privacy by design in all products, including through adequate resourcing of in-house trust and safety expertise, alongside consistent application of policies across countries and languages; 

(ii) Invest in human and artificial intelligence content moderation systems in all languages used in countries of operation, and ensure content reporting mechanisms are transparent, with an accelerated response rate, especially in conflict settings; 

All stakeholders should take urgent and immediate measures to ensure the safe, secure, responsible, ethical and human rights-compliant use of artificial intelligence and address the implications of recent advances in this field for the spread of mis- and disinformation and hate speech. 

Integrity Institute comments:

Platforms should follow design practices that mitigate exposure to disinformation and hate speech, such as limiting user exposure to content from accounts they don’t follow, ensuring recommended content meets a very high quality standard, not using engagement-based ranking, setting strong privacy defaults, and limiting direct contact with strangers.

Platforms should ensure that metrics that track the safety of the platform are considered in how the company measures its success, how the company determines its strategy, and when the platform decides to make changes. Safety metrics should be comprehensive and cover the scale, cause, and nature of harms on the platform. Platforms should also measure their company performance over the long term, not just based on short term engagement or time spent on the platform.

Platforms should also ensure they have positive metrics for evaluating success and platform systems that are not based on platform usage and engagement and that are aligned with company mission and values. These could include metrics around content quality, user surveys, or clearly positive user experiences.

Section 10 : Other principles to suggest

Integrity Institute comments:

Investment in multi-stakeholder approaches: (building out from principle 6) Platforms and governments should acknowledge the important role that external researchers, civil society, and independent media play in globally mitigating information integrity harms, and invest in meaningful, mutually-beneficial models of engagement around solving (and regulating) platform harms. This is important to address risks that arise from the power imbalance inherent in an ecosystem where a handful of companies are determining the precise boundaries of what content people see. The power imbalance risks could be mitigated by using deliberative approaches to platform governance, including creating citizen panels, assemblies, or independent oversight bodies, or using crowdsourcing mechanisms, to make or inform content moderation decisions or policies.

Section 11 : Suggestions for methodologies of implementation

Integrity Institute comments:

Specific and comprehensive transparency guidelines will be key for platforms to demonstrably implement these principles. There are opportunities to tie these to existing mechanisms with transparency obligations (e.g., the EU Code of Practice on Disinformation or DSA transparency dashboards) so as not to be overly duplicative, but they should look beyond regional regulation. There is an opportunity for the UN to set a globally applicable standard for what transparency from platforms is needed to understand and address information integrity harms, and for what constitutes fair and equitable resourcing towards mitigating these harms. Throughout our comments, we included recommendations on the information that is useful and necessary to ensure real transparency from platforms on these points.

There is also a need to safeguard against abuse or manipulation of the principles or related recommendations by governments or other actors looking to use them to justify human rights violations, including suppression of journalists and expression or increased general surveillance. Transparency from governments on requests made to platforms, and appropriate rule-of-law processes to support them, can help; more robust safeguards specific to each principle should be explored.
