Questions for Platforms on Child Safety for Congressional Record

Contributing authors: Jeff Allen, Abby Lawson, Spencer Gurley, Alexis Crews, Jenn Louie, Matt Motyl, Gabe Freeman, Vaishnavi J, Sarah Oh, Davinia Santimano, Sarah Vieweg


Following the January 31 Senate hearing on child safety online, Senators will be sending written questions to the CEOs of tech companies. These questions and the CEOs’ answers will be included in the written record for the hearing.

The Integrity Institute has been following the hearing closely and providing independent expertise and analysis. Prior to the hearing, the Institute published a comprehensive list of recommended best practices for tech companies to protect child safety online. Following the hearing, the Institute reacted to the Senators’ lines of questioning.

Based on the themes that emerged from the hearing, the Institute has now prepared additional questions that could serve as a guide for Senators’ written questions to the CEOs of tech companies. The list of additional questions is included in full below.


Theme 1: How much harm do children experience online?

Goal: Society currently lacks sufficient data about the true scale, cause, and nature of harms occurring on platforms. Most transparency reports showcase what companies took action against but omit how many people were exposed to harmful content. Platforms do not voluntarily offer that data in their current transparency reports, which is unsurprising, because the data could be damning. Yet there is a large amount of data that society needs from platforms to assess the impact they are having. These questions cover that data.

Priority Questions

For each of the policy violation types CSAM, CSAM Solicitation, Suicide and Self Injury, Pro-Eating Disorders, and Bullying and Harassment, please report:

  • How many total views did known violating content receive in the past 30 days? And what % of those views were from users known to be under the age of 18?

  • How many users were exposed to known violating content in the past 30 days? And what % of those are known to be under the age of 18?

  • How many users had more than 10 exposures to known violating content in the past 30 days? What % of those are known to be under the age of 18?

  • What percentage of views on known violating content in the past 30 days occurred because your platform recommended the content to the user, whether through suggested content in a feed, a dedicated feed of recommended content, content shown as the result of a search, or another algorithmic system for ranking or recommending content?

  • What percentage of views on known violating content in the past 30 days occurred because the user followed the account that posted the violating content after you recommended the account to them?

  • What percentage of the views on known violating content in the past 30 days occurred in direct messages?

    • For Meta: How does that percentage compare to the percentage that occurred in direct messages in November 2023?

These questions will give us a clear sense of how many people are being harmed on each platform and what fraction of that harm can be directly attributed to the design choices and systems the company has made.
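To make the request concrete, here is a minimal sketch of how these figures could be computed from a per-view event log. The log schema (an is_known_violating flag, a viewer_age_band field, and a surface field recording what delivered the view) is a hypothetical assumption for illustration, not any platform's actual logging.

```python
# Minimal sketch, assuming a hypothetical per-view event log. The field names
# (is_known_violating, viewer_age_band, surface) are illustrative, not any
# platform's actual schema.
import pandas as pd

RECOMMENDED_SURFACES = {"suggested_in_feed", "recommended_feed", "search", "other_ranking"}

def violating_view_metrics(views: pd.DataFrame) -> dict:
    """Compute the view-based figures requested above for one violation type."""
    v = views[views["is_known_violating"]]  # views of known-violating content only
    if len(v) == 0:
        return {"total_views": 0}
    return {
        "total_views": len(v),
        "pct_views_under_18": round(100 * v["viewer_age_band"].isin(["under_13", "13_17"]).mean(), 2),
        "pct_views_from_recommendations": round(100 * v["surface"].isin(RECOMMENDED_SURFACES).mean(), 2),
        "pct_views_in_direct_messages": round(100 * (v["surface"] == "direct_message").mean(), 2),
    }

# Toy example: three views of known-violating content, one of benign content.
views = pd.DataFrame({
    "is_known_violating": [True, True, True, False],
    "viewer_age_band": ["13_17", "18_plus", "under_13", "18_plus"],
    "surface": ["recommended_feed", "direct_message", "search", "followed_feed"],
})
print(violating_view_metrics(views))
```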

All Questions

Count of Views & Exposure

As a more comprehensive alternative to the above questions, companies could complete this spreadsheet.

  • All view counts are requested monthly since January 2023, broken down by age group (under 13, 13-18, 18+), language, and region.

Taxonomy, Definition, and Identification of Harms

  • Do you have a taxonomy of harms that can occur to your users as a result of being exposed to content that violates your policies against CSAM, CSAM solicitation, self-injury, pro-eating-disorder content, and bullying and harassment?

    • If so, what are your external descriptions and internal definitions/identifying markers of possible harms to your users?

    • How do you define the severity level of each harm?

    • What priority level do you give to each harm type based on severity level and other risk factors (please list those other risk factors)?

  • What research have you done about vulnerable populations and how they experience harm? 

    • Do you have special policies or carve outs in harm taxonomies and severity of harm for vulnerable populations?

  • Do you have a definition for borderline or sensitive content that might not violate your content policies but is inappropriate for children to view?

    • If so, how many views did borderline or sensitive content get from users under the age of 18 in the past month?

    • How many of those views were a result of content that was recommended to the user?

  • What research or studies do you do to understand the impact your products have had on young people?

Theme 2: Are Companies Investing in Safety Appropriately and Effectively?

Goal: The public currently has little insight into whether companies are investing in safety at a level appropriate to the scale of harm, and in a way that actually mitigates it. The companies report very little data that would allow the public to assess this, and it cannot be tracked solely by how many employees a company has or how much money it spends. Those figures matter, but we need more data from the companies about the impact their safety efforts have on minimizing harm to users.

Priority Questions

For each of the policy violation types CSAM, CSAM Solicitation, Suicide and Self Injury, Pro-Eating Disorders, and Bullying and Harassment, please report the following data for the past 30 days:

  • How many reports did you receive from users that content was potentially violating?

  • What percentage of reports are on content that is at some point evaluated by a human content moderator?

    • What is that same percentage but for reports coming from a user that is known to be under the age of 18?

  • What percentage of reports are “auto-dismissed”, meaning an algorithmic system decided that the content didn’t violate your policies and no human was brought in to evaluate, or “auto-closed”, meaning the report was never evaluated due to a lack of moderator capacity?

    • What is that same percentage but for reports coming from a user that is known to be under the age of 18?

    • What percentage of these “auto-dismissed” and “auto-closed” reports turn out to have been dismissed incorrectly, meaning the content does in fact violate your policies?

    • What percentage of reports that are initially identified as benign turn out to be violating, broken down by human review versus automated decision?

  • What is the average time delay between when a piece of content is first reported as potentially violating by a user and when it is evaluated by a human content moderator?

    • What is that same average time delay when the content is reported by a user that is known to be under the age of 18?

  • How many views does content get, on average, between when it is first reported as potentially violating by a user and when it is evaluated by a human content moderator? (A sketch of how these report-handling figures might be computed follows this list.)

    • What is that same view count when the content is reported by a user that is known to be under the age of 18?

  • What prioritization factors and frameworks does your company use to determine investment in trust & safety versus other company priorities?

    • How often are these factors and frameworks updated and amended?

    • Where does child safety land compared with other company investments?

    • Can you provide your company’s prioritization models from 2023 to the present?

  • How much do you invest in child safety for English-speaking users in the US compared with users of all other languages and in non-US countries?
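As noted above, here is a minimal sketch of how the report-handling figures in this section could be computed from a per-report log. The columns (report_ts, human_review_ts, outcome, views_before_review) are hypothetical assumptions for illustration, not any platform's actual schema.

```python
# Minimal sketch, assuming a hypothetical per-report log with one row per user
# report: when it was filed, when (if ever) a human reviewed it, its outcome,
# and how many views the content accrued before review. Column names are
# illustrative assumptions, not any platform's actual schema.
import pandas as pd

def report_handling_metrics(reports: pd.DataFrame) -> dict:
    """Summarize how user reports of potentially violating content are handled."""
    reviewed = reports.dropna(subset=["human_review_ts"])
    delay = reviewed["human_review_ts"] - reviewed["report_ts"]
    auto = reports["outcome"].isin(["auto_dismissed", "auto_closed"])
    return {
        "pct_reports_human_reviewed": round(100 * len(reviewed) / len(reports), 2),
        "pct_reports_auto_dismissed_or_closed": round(100 * auto.mean(), 2),
        "median_hours_to_human_review": round(delay.dt.total_seconds().median() / 3600, 1),
        "avg_views_before_human_review": round(reviewed["views_before_review"].mean(), 1),
    }

# Toy example: two reviewed reports, one auto-dismissed report.
reports = pd.DataFrame({
    "report_ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 06:00", "2024-01-02 00:00"]),
    "human_review_ts": pd.to_datetime(["2024-01-01 12:00", "2024-01-02 06:00", pd.NaT]),
    "outcome": ["removed", "no_violation", "auto_dismissed"],
    "views_before_review": [150, 40, 900],
})
print(report_handling_metrics(reports))
```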

All Questions

  • Provide the monthly trust & safety spend, ideally over the past two years (to cover the period of industry layoffs), broken down by harm type, region, and language, separated into:

    • Spend on staffing (FTE vs. Contractor) 

    • Spend on other investments (infrastructure, technology such as ML/AI, etc.)

  • If your investment in safety has increased or decreased in the past year, what has been the impact of that change in investment?

    • Do you believe it has resulted in an increase or decrease in the prevalence, views, and reach of violating content?

    • Do you believe it has resulted in an increase or decrease in child safety reports?

    • How do you measure the success of the change in investment?

  • Do you conduct safety reviews of your product and feature launches? Can safety reviews prevent features from being launched? How do safety reviews compare to privacy reviews?

  • How has your investment in safety at a global scale changed over the past two years?

    • Do you have teams that engage with local experts around the world on the harms the platform can cause? (Frequently called global affairs, public policy, or partnership teams.)

    • How has the staffing level of those teams changed over the past two years?

Theme 3: Are the Platforms Responsibly Designed?

Goal: The design choices that companies make when building their platforms can have a huge impact on how much harm occurs on them. Irresponsible design choices can greatly amplify the harm that users experience and negate any investment in the safety of the platform. Without responsible decision-making, a company can end up prioritizing growth and engagement over the safety of the platform.

Priority Questions

  • How are integrity and trust & safety integrated into product, growth, and safety teams?

  • How do you ensure that child safety is integrated into every new product, platform change, or new feature? And how do you ensure child safety is integrated into existing products?

    • Do you do safety audits of all of your product launches? If so, what do you test for? What does the audit report look like? How often is this done?

    • How much opportunity do safety teams have to block new platform features that could play a significant role in causing harm?

  • Who possesses veto power over whether a change or product update launches due to a safety concern? 

  • In 2018, Mark Zuckerberg stated that content that is more likely to be policy-violating, and thus harmful, tends to get more engagement:

    • Do you use machine learning systems to predict whether users will engage with content (like, reply to, share, look at, etc.)?

    • If yes, how does policy-violating content perform (specifically for the violation types CSAM, CSAM Solicitation, Suicide and Self Injury, Pro-Eating Disorders, and Bullying and Harassment)? For policy-violating content that is not automatically removed from your platform, are its engagement prediction scores systematically higher than for content overall? (See the sketch after this list.)

    • How do you ensure that your machine learning systems that predict if users will engage with content are not amplifying policy violating or harmful content?

  • What percentage of accounts belonging to users under the age of 18 have parental controls and age-appropriate features enabled? What are your metrics for success for these features? How do you know parental controls are working and are effective against the various types of harm children can experience on your platform?
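The sketch referenced in the engagement-prediction question above: a minimal illustration of how one could check whether predicted-engagement scores for policy-violating content run systematically higher than for content overall. The scores, labels, and toy data are hypothetical; this is not any platform's actual ranking code.

```python
# Minimal sketch, assuming predicted-engagement scores have been logged for a
# sample of content along with a later determination of whether each item
# violated policy. Inputs are hypothetical illustrations only.
import numpy as np

def compare_engagement_scores(scores: np.ndarray, is_violating: np.ndarray) -> dict:
    """Compare predicted-engagement scores for violating vs. overall content."""
    violating = scores[is_violating]
    top_decile_cutoff = np.quantile(scores, 0.9)
    return {
        "median_score_violating": float(np.median(violating)),
        "median_score_overall": float(np.median(scores)),
        # Share of violating items scoring above the overall 90th percentile;
        # if scores were unrelated to violation status, this would be ~10%.
        "pct_violating_above_overall_p90": float(100 * np.mean(violating > top_decile_cutoff)),
    }

# Toy example: violating items tend to receive higher predicted engagement.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.3, 0.1, 950), rng.normal(0.5, 0.1, 50)])
is_violating = np.concatenate([np.zeros(950, dtype=bool), np.ones(50, dtype=bool)])
print(compare_engagement_scores(scores, is_violating))
```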

All Questions

Product & Platform Safety Alignment

  • How do your automated content classification systems perform on non-English content? How do their confidence rates compare to those for English content?

  • What is your process for deciding which features to enable for teen users, especially high-risk features?

    • For Meta: Why does Instagram allow DMing between teens and adult users? 

    • For TikTok: How did you decide to allow DMs for teens 16 and over as opposed to 18+?

    • For Snap: After TikTok limited DMs for teens under the age of 16, how did you decide that you would not follow suit?

    • For Discord: How did you come to the decision that 200-person group chats should be considered private?

  • Primarily for Meta, TikTok, X: How do the platform’s recommendation systems work? Why is it seemingly so easy to cause your content recommendation systems to show users, including children, content that violates your policies and is harmful? Why does engaging with a few pieces of self injury content or CSAM solicitation content lead to numerous recommendations of similar content?

Theme 4: Are Platforms Empowering Bad Actors?

Goal: Platforms provide opportunities for bad actors seeking to take advantage of people, including children, for financial gain or other motives. As with design choices, there are behavioral patterns that companies can design against and security measures they can use to keep the platform safe. Bad actors will always be present on platforms and will always be probing the platform’s safety, so companies need to be held accountable when they empower bad actors rather than removing and disincentivizing them.

Priority Questions

For each of the policy violation types CSAM, CSAM Solicitation, Suicide and Self Injury, Pro-Eating Disorders, and Bullying and Harassment, please report the following data for the past 30 days:

  • What percentage of views on known violating content were on content posted by accounts that violated your policies against fake and inauthentic accounts?

  • What percentage of views on known violating content were on content posted by accounts that have previously been found posting violating content or posted by a user who has previously been found posting violating content on an alternate account?

  • What percentage of views on known violating content occur in private and/or encrypted spaces (e.g., messaging apps, DMs, closed groups)?

    • For Meta: How many harms were detected, predicted, and reported before encryption was implemented? How did those numbers change after encryption was implemented at the end of 2023?

All Questions

  • What does your platform do proactively to ensure child safety before exposure to harm or child grooming actually occurs?

  • For Meta: Why do you allow E2EE for teens when you have unsafe recommendation surfaces? Can you commit to either cleaning up your recommendations or removing E2EE for teens?

  • What does your company do to prevent fake and inauthentic accounts from messaging teens?

  • Does your company notice and track how spam and scaled fraud intersect with child safety?

    • How do you address the overlap between child safety, child exploitation, and other types of abuse such as scaled or targeted fraud?

  • Are you confident that your reporting flow is easily understood by children? What data or research gives you that confidence?

Theme 5: Relationship Between Safety & Growth

Goal: Integrity and trust & safety workers very commonly find that a platform’s growth and total engagement are in tension with its safety: platform updates and new features that increase engagement often also hurt safety. In some sense, this is a problem of the company not understanding the tradeoff between short-term growth, which is frequently in tension with safety, and long-term growth, which can be more aligned with it. It is essential that platforms be held accountable when they make decisions that prioritize short-term business interests at the expense of safety.

Priority Questions

  • How does your company use metrics and data to make decisions about what platform changes or new features to launch and make part of the default user experience?

    • What are the primary A/B testing or experimentation metrics your company uses to measure the safety of the platform?

    • What are the primary A/B testing or experimentation metrics your company uses to measure the growth of and engagement on your platform?

    • How often does your company see conflict between how you measure growth and engagement and how you measure the safety of the platform when evaluating new features and platform changes? That is, how often do you see new features or platform changes that increase growth and engagement but also decrease safety? (A sketch of how such a tradeoff might show up in experiment metrics follows this list.)

    • How does your company decide when to launch such features and updates that increase engagement or growth but decrease safety? What frameworks does your company use to navigate those decisions? What teams, organizations, and leaders are involved in those decisions?

  • What is your approach to ensuring the growth and engagement on your platform is safe for users? How do you measure safe growth and safe engagement?
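As referenced above, here is a minimal sketch of the kind of experiment readout these questions describe: an A/B test scored on both an engagement metric and a safety guardrail, with a flag when the two move in opposite directions. The metric names, guardrail, and toy numbers are illustrative assumptions, not any company's actual launch criteria.

```python
# Minimal sketch: compare an experiment's treatment and control arms on an
# engagement metric and on a safety guardrail (here, the prevalence of views on
# known-violating content). All names and numbers are illustrative assumptions.

def launch_readout(control: dict, treatment: dict) -> dict:
    """Flag when a change grows engagement but regresses the safety guardrail."""
    engagement_delta = (treatment["daily_engagement"] / control["daily_engagement"]) - 1
    safety_delta = (treatment["violating_view_prevalence"] / control["violating_view_prevalence"]) - 1
    return {
        "engagement_change_pct": round(100 * engagement_delta, 2),
        "violating_prevalence_change_pct": round(100 * safety_delta, 2),
        "growth_safety_conflict": engagement_delta > 0 and safety_delta > 0,
    }

# Toy example: a feature lifts engagement ~3% but raises violating-content
# prevalence ~8%, exactly the tradeoff the questions above probe.
print(launch_readout(
    control={"daily_engagement": 100.0, "violating_view_prevalence": 0.0012},
    treatment={"daily_engagement": 103.0, "violating_view_prevalence": 0.0013},
))
```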

All Questions

  • What factors, definitions, or measures does your company use to qualify content as safe, authentic, and healthy for children?

  • How do you define or determine what quality and quantity of time spent on the platform is healthy for a child?
