Comment on Meta’s Approach to the Term “Shaheed”

In March 2023, the Oversight Board announced that it had accepted Meta’s request for a policy advisory opinion on the company’s approach to moderating the Arabic term “shaheed” when used to refer to individuals Meta classifies as dangerous, including terrorists. To inform this opinion, the Oversight Board invited public comments on Meta’s request.

Integrity Institute members Shahyan Ahmad, Jeff Allen, Talha Baig, Swapneel Mehta, and Theo Skeadas prepared these comments, and the Institute submitted them to the Oversight Board on the members’ behalf on April 17, 2023.


As Integrity Institute members, we welcome the opportunity to provide comments on PAO 2023-01. Meta’s approach to “shaheed” as praise undermines freedom of expression by limiting critical discourse in conflict zones and beyond, and fails to account for the term’s positive meaning across regions, languages, and dialects. Undermining freedom of expression adversely impacts freedom of assembly, the right to political participation, and non-discrimination, and can further distort the international community’s understanding of complex social issues.

Instead of the three policy options that Meta submitted to the Oversight Board for consideration, we recommend removing the term “shaheed” as a content moderation signal entirely. The current policy and all three proposed options are likely to lead to over-enforcement:

  • From the technical side:

    • Any automated system trained on this signal will learn to over-enforce, whether keyed to the term “shaheed” itself or to non-unique names on the dangerous individuals list (see the sketch after this list)

    • Human content moderators are also likely to over-enforce systematically, especially when they cannot confidently match names mentioned in posts to listed individuals

    • Over time, any list of dangerous individuals would become polluted by benign content, as platforms err on the side of recall.

  • When cultural specificity meets the day-to-day operations of content moderation:

    • Any policy that still uses “shaheed” as a content moderation signal would lead to scenarios where content critical of dangerous individuals gets flagged

    • There remain systemic issues with how dangerous individuals and organizations are designated, and this systemic bias is reflected in the application of “shaheed”
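To make this failure mode concrete, below is a minimal sketch in Python of how a naive keyword signal that pairs “shaheed” with a listed name flags praise, criticism, and news reporting alike. The list entry, example posts, and matching logic are invented for illustration; this is not Meta’s actual system.

```python
# Hypothetical sketch of why a keyword-based signal over-enforces.
# The list entry and example posts are invented for illustration.

DANGEROUS_INDIVIDUALS = {"example designated person"}  # hypothetical entry

def naive_flag(post: str) -> bool:
    """Flag any post that pairs 'shaheed' with a listed name."""
    text = post.lower()
    return "shaheed" in text and any(name in text for name in DANGEROUS_INDIVIDUALS)

posts = [
    "the shaheed example designated person deserves praise",                 # praise (intended target)
    "calling example designated person a 'shaheed' whitewashes his crimes",  # criticism
    "news: mourners called example designated person a 'shaheed' today",     # journalism
]

for post in posts:
    print(naive_flag(post), "->", post)
# All three posts are flagged. A pure keyword signal cannot separate
# praise from criticism or news reporting, which is precisely the
# over-enforcement risk described above.
```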

Below, we provide additional details for each of the Oversight Board’s five public comment requests that support our recommendation:

Examples of how Meta’s current approach to “shaheed” as praise impacts freedom of expression on Instagram and Facebook, especially for civil society, journalists, and human rights defenders in regions where the word is commonly used.

Meta’s current approach to “shaheed” results in false-positive removals of content from news providers, sources of spiritual guidance, and individuals marking moments of cultural, personal, or religious importance. This inhibits critical discourse and could be perceived as unfair bias.

  • “Shaheed” is frequently used by marginalized Muslim groups to refer to members of their community who are murdered in acts of religious violence, including the Rohingya, refugees who have been forced out of Myanmar due to religious persecution. 

  • “Shaheed” is used for those who die of nonviolent causes while performing a religious duty or carrying out tasks on behalf of a religious organization.

  • “Shaheed” is commonly used in countries like Pakistan to refer to those who died in a secular line of duty, be they assassinated politicians, soldiers, or police officers.

Research into the connection between restricting praise of individuals associated with terrorist organizations on social media and the effective prevention of terrorist acts.

There is little evidence that restricting praise of individuals associated with terrorist organizations on social media prevents terrorist acts.

  • Research and empirical evidence suggest such restrictions tend to harm everyday users more than bad actors, who adapt and bypass naive content-based filters through coded language; approaches like “ethical scaling” offer better models for content moderation.

  • Minority populations often bear the brunt of sweeping policy changes.

  • There are too few content moderators who speak underserved languages. For example, in 2014 Meta had only one Burmese-speaking content moderator to review the posts of 1.2 million active Burmese users, and in 2019 action was taken against only “2% of the hate speech on the platform.”

How Meta should account for the variety of meanings and diverse cultural contexts for using the term “shaheed” in different regions, languages and dialects, given the trade-offs inherent in enforcing content policies at scale, and the implications for Meta’s responsibility to respect human rights.

The term “shaheed” has a positive meaning across regions, languages, and dialects, with some regional differences. For example:

  • In Turkey, “shaheed” can refer to sacred martyrdom.

  • In Azerbaijan, “shaheed” in Azeri refers to a victim of criminal violence or someone who dies in a war.

  • In India, “shaheed” means “martyr” in a positive sense; a “shaheed” holds a high place in the hearts of Muslims globally. Further, some of the farmers who died during the 2020–2021 farmers’ protests were called “shaheed.”

  • In Pakistan, “shaheed” is a respectful way of saying that someone has died, as with the death of Benazir Bhutto. “Shaheed huay” (“was martyred”) is a common part of Urdu news idiom.

  • In Egypt, “shaheed” can refer to anyone who dies unexpectedly due to external causes. For example, someone who dies in a fire could be considered “shaheed.” 

  • In Tunisia, “shaheed” is regularly used in non-religious contexts. During the Tunisian revolution, a Tunisian man shouted مجد الشهداء (“glory to the martyrs”) with no religious connotations; it referred to those killed by Ben Ali’s forces.

  • In Singapore, someone who dies during Ramadan is considered to have died “shaheed,” in a positive sense.

What processes and safeguards should be in place to mitigate the risks of under- or over-enforcement of the Dangerous Individuals and Organizations policy, in particular across diverse cultures, languages and dialects.

We suggest numerous approaches to mitigate the collateral risks of this policy, starting with a foundational review.

  • The word “shaheed” and its translations should be reviewed consistently across all languages. The negative impact of Meta’s enforcement of this term could be partly mitigated if Meta can demonstrate uniform enforcement of similar terms in other languages and cultures.

  • We agree with BSR’s recommendations around “determining the market composition needed for rapid response capacities, the routing of potentially violating Arabic content to reviewers by dialect and region, improving classifiers, means to track hate speech based on type, and enhancing content moderation quality control processes to prevent large-scale errors.”

  • Meta should provide a robust and extensible reporting interface that allows users to contest enforcement decisions related to this term, including past reports.

  • Meta can consider creating a consortium to address this issue, similar to the Global Internet Forum to Counter Terrorism (GIFCT).

How to measure the accuracy of policy enforcement in this area, including in the use of automation, to counter the potential for bias or discrimination, and how to reflect this in transparency reporting and/or enable independent researchers access to relevant data.

Transparency reporting and enabling researcher access to relevant data can improve accuracy and reduce bias of policy enforcement in this area. 

  • Meta should publicly report its content moderation activity consistently across languages in this space, including comprehensive data on user reports, action rates, types of action, efficacy of mitigation techniques, training information, and appeal rates (submitted and approved); the sketch after this list illustrates one such per-language metric.

  • Meta can provide privacy-preserving data sets to vetted independent researchers and civil society organizations, to give insight into how the policy is being applied. One model for this is the Twitter Moderation Research Consortium.
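As a hypothetical illustration of the kind of per-language accuracy metric such reporting could surface, the sketch below computes appeal-overturn rates by language from an invented moderation log. The record schema and numbers are assumptions made for illustration, not Meta’s actual data.

```python
# Hedged sketch: surfacing per-language disparities in enforcement
# accuracy from a toy moderation log. Schema and values are invented.

from collections import defaultdict

# Each record: (language, action_taken, overturned_on_appeal).
moderation_log = [
    ("arabic", True, True),
    ("arabic", True, True),
    ("arabic", True, False),
    ("english", True, False),
    ("english", True, True),
]

stats = defaultdict(lambda: {"actions": 0, "overturned": 0})
for language, actioned, overturned in moderation_log:
    if actioned:
        stats[language]["actions"] += 1
        stats[language]["overturned"] += int(overturned)

for language, s in sorted(stats.items()):
    rate = s["overturned"] / s["actions"]
    print(f"{language}: {s['actions']} actions, {rate:.0%} overturned on appeal")
# A markedly higher overturn rate for one language (here, 67% for Arabic
# vs. 50% for English in this toy data) is the kind of disparity that
# consistent transparency reporting and researcher access could expose.
```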
