Child Safety Online

Overview

Concerns about social media’s impact on children are well-documented and well-founded, as research suggests that some features of how social media functions can harm some young people’s mental health. The harms we see impacting children are not markedly different from the harms impacting adults: concerns about privacy, exposure to harmful content (both legal and illegal), bullying, harassment, unwanted contact from strangers (including people seeking to sexually exploit minors), addiction and problematic overuse, and the spread of non-consensual intimate imagery (which, in the case of children, constitutes child sexual abuse material, or CSAM, a harm so egregious that it is heavily regulated in its own right). These harms are especially concerning when it comes to children, and platforms have a responsibility to implement additional safeguards to mitigate them and to provide transparency around how they are doing so.

What will help make online platforms safer for kids? 

#1 Empowered Integrity Workers

There are people at the platforms working daily to make them safer for children (and all users): integrity workers. These workers know the platforms, and it is their job to understand how harms manifest and to explore how they can be mitigated. Instead of micromanaging platforms with lists of required features or changes (which will not apply to every platform and will not protect against future, as yet unidentified harms), policymakers should focus on measures that empower integrity workers to do their jobs better and give them more influence within the companies; this is the most effective lever for making platforms safer in the long term. This is difficult to translate into direct policy, but within the companies, many employees are already advocating for more responsible design choices (see, for example, features.integrityinstitute.org). These integrity workers are not as successful as they should be because there is no comprehensive set of incentives outside the companies to encourage their employers to build more responsibly and take risks more seriously. Policy that creates those external incentives will have more impact than direct mandates of specific platform design practices. Such policy could include stronger and more meaningful transparency requirements for platforms, mandated risk assessments and audits, access for researchers, and stronger whistleblower protections.

#2 Transparency about the scale, nature, and causes of harms on the platform

Platform transparency is key to understanding the harms young people experience through social media so that proper mitigations can be developed. Transparency also provides an avenue to hold platforms accountable for their efforts to combat the spread of harmful and illegal content. Platforms need to provide information on the scale, cause, and nature of harms on their services, as well as information on their internal processes and investments in measures to mitigate risks to children:

  • Scale: How much harmful content is there? How many impressions on policy-violating content are there per month? How many people (and specifically minors, or accounts predicted to belong to minors) are exposed to known policy-violating content per month? What percent of all content impressions or views are on policy-violating content, based on a full prevalence estimate? (A sketch of such an estimate follows this list.)

  • Cause: Why are users being exposed to violating content? E.g., what vectors contribute to the risk of a user seeing violating content? To what extent are platform design and ranking algorithms playing a role? This will include releasing information about platform algorithmic systems (what are the most important features used to rank content, what information about users is used, and what are the topline metrics used to evaluate these systems?). What fraction of impressions on violating content come from recommended content, recommended accounts, or accounts that a user has chosen to follow? How does risky (policy-violating and borderline) content perform in these algorithmic systems? (More information on this can be found in our “Ranking and Design Transparency” briefing.)

  • Nature: Who is seeing the harmful content? What is the frequency distribution of impressions on violating content, e.g., how many users are exposed to how many pieces of violating content per month? Are exposures evenly distributed among all users, or is a small subset of users seeing a large share of violating content? If the latter, what factors might explain why they see more than others? This should include demographic data, such as what percentage of known violating exposures fall in each age range or each geographic region (state level, etc.).

  • Datasets: Providing datasets (including the definitions and methodology behind them) is crucial to allow external verification of claims made by the platforms. Platforms should release a dataset of at least the top 10,000 pieces of public content (by impressions or views) on the platform per week for the reporting period, as well as a dataset containing a random sample of at least 10,000 impressions on public content over a specified time period.

  • Internal risk assessment and mitigation processes: What are platforms doing to understand and mitigate harm to children on their services? Platforms should release risk assessments conducted on their sites that include risks to children and plans for mitigating those risks. This should include transparency about the level of investment made to address risks to children: the specific mitigation mechanisms being funded, the resources and support behind them, and the priority placed on child harms relative to all other harms and to other corporate investments.

  • Researcher access: In addition to public transparency through the above means, providing access to external, independent, expert researchers is important to ensure that the platform is having a positive impact on the societies where it is used, to understand gaps in policies and biases in systems, and to protect against risks and harms that may be prevalent in regions of the world where platforms have no internal expertise or knowledge.
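To make the “Scale” and “Cause” questions above concrete, here is a minimal, purely illustrative sketch (in Python) of how a prevalence estimate and a source-surface breakdown could be computed from a labeled random sample of impressions. The field names, labeling step, and sampling approach are assumptions for illustration, not a description of any platform’s actual measurement pipeline.

```python
import random
from collections import Counter

def estimate_prevalence(impression_sample):
    """Estimate what fraction of all impressions land on policy-violating
    content, from a uniform random sample of impressions.

    `impression_sample` is a hypothetical list of dicts such as
    {"content_id": ..., "is_violating": bool, "source": "recommended" | "followed"},
    where the labels would come from human review of the sampled content.
    """
    if not impression_sample:
        return 0.0, {}
    violating = [imp for imp in impression_sample if imp["is_violating"]]
    prevalence = len(violating) / len(impression_sample)

    # Cause breakdown: which surfaces the violating impressions came from
    # (recommended content vs. accounts the user chose to follow).
    by_source = Counter(imp["source"] for imp in violating)
    source_share = (
        {source: count / len(violating) for source, count in by_source.items()}
        if violating else {}
    )
    return prevalence, source_share

# Example with made-up data: 10,000 sampled impressions, ~0.5% violating.
sample = [
    {"content_id": i,
     "is_violating": random.random() < 0.005,
     "source": random.choice(["recommended", "followed"])}
    for i in range(10_000)
]
prevalence, source_share = estimate_prevalence(sample)
print(f"Estimated prevalence: {prevalence:.2%}")
print(f"Share of violating impressions by source: {source_share}")
```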

#3 Limiting engagement-based ranking

What is engagement-based ranking? Systems in which platforms track everything users engage with, use that history to predict what users will engage with in the future, and then place the content predicted to elicit engagement at the top of users’ feeds.
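As a purely illustrative sketch of this definition (not any platform’s actual system), the ranker below orders a feed solely by predicted engagement; the “model” here is just a stand-in that treats past engagement with a topic as a proxy for future engagement.

```python
def rank_feed(candidate_posts, user_history):
    """Order a user's feed purely by predicted engagement.

    `candidate_posts` is a hypothetical list of dicts like
    {"post_id": ..., "topic": ...}; `user_history` counts the user's past
    engagements by topic. The more a user has engaged with a topic, the
    higher similar content scores -- regardless of whether that content is
    healthy, borderline, or harmful.
    """
    def predicted_engagement(post):
        # Stand-in for a learned model: past engagement with the same
        # topic is treated as a proxy for future engagement.
        return user_history.get(post["topic"], 0)

    return sorted(candidate_posts, key=predicted_engagement, reverse=True)

# Example: a user who has engaged heavily with borderline content keeps
# being shown more of it, because the ranker optimizes only for engagement.
history = {"borderline_diet_content": 40, "cooking": 5, "news": 2}
feed = rank_feed(
    [{"post_id": 1, "topic": "news"},
     {"post_id": 2, "topic": "borderline_diet_content"},
     {"post_id": 3, "topic": "cooking"}],
    history,
)
print([post["post_id"] for post in feed])  # the borderline post ranks first
```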

Problem: Engagement-based ranking naturally leads to the amplification of harmful content. This has been acknowledged by major platforms and confirmed by their own internal research.

Mark Zuckerberg: “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content.”

Problem: This maximizes the addictive potential of the platform: as the system learns more about what will engage a user, it gets better at presenting content that keeps the user engaged and on the platform.

Limiting engagement-based ranking gets at the core of why harmful content is so readily available on platforms: so long as engagement drives platform ranking systems, no “safety” or “integrity” system patched on top of engagement-based ranking will ever solve that underlying problem.

#4 Limiting recommended content

Recommending content and accounts to users poses risks, especially when the recommendations are designed to get users to take engagement actions (because, as noted above, content that is more likely to get engagement tends to sit closer to the line of violating or harmful content).

Proactive content recommendation systems present distinct challenges for the well-being and development of children. For one, they subvert the traditional user-tool relationship of the internet: historically, users approached websites with a specific intent or interest, and the site aided them in furthering their stated aims. However, in a recommendation system, the platform assumes full control over how a user's attention is directed. The economic incentive for platforms to maximize attention leads them to implement strategies from casino design to direct and retain user attention: instant gratification, variable reward schedules, and parasocial engagement, each of which helps to form habits of use. Their recommendation systems direct user attention subjectively and flexibly, typically without control or transparency for users. Furthermore, as the internet is increasingly a venue for children to form their identities, the risk is that the prevalence of attention-maximizing content recommendations can homogenize and distort young users' sense of the world and themselves, facilitating social comparison at an alarming scale and potentially stunting their development of independent will and self-directed preferences.

  • Children should, by default, have a very safe experience when it comes to recommended content and, ideally, not be presented with content they have not elected to see.

  • Parents and kids should also be able to set hard time limits for their use of the platform or app.

  • Platforms should allow users to reset their recommendation profile (a sketch of such a reset follows this list).
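The sketch below illustrates, under assumed names and fields, what safe-by-default recommendation settings for minors and a “reset recommendation profile” control could look like; it is a hypothetical shape for the feature, not any platform’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationProfile:
    """Hypothetical per-user state that a recommender learns over time."""
    inferred_interests: dict = field(default_factory=dict)    # topic -> weight
    watch_history_signals: list = field(default_factory=list)
    is_minor: bool = False
    recommendations_enabled: bool = True

def apply_minor_defaults(profile: RecommendationProfile) -> None:
    """Safe by default: minors only see content they elected to see."""
    if profile.is_minor:
        profile.recommendations_enabled = False

def reset_recommendation_profile(profile: RecommendationProfile) -> None:
    """Discard everything the recommender has inferred about the user,
    so recommendations start again from a neutral state."""
    profile.inferred_interests.clear()
    profile.watch_history_signals.clear()

# Usage: a minor's profile starts without proactive recommendations,
# and any user can wipe their learned profile at any time.
profile = RecommendationProfile(is_minor=True)
apply_minor_defaults(profile)
reset_recommendation_profile(profile)
```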

#5 Privacy settings by default

Privacy settings offered by platforms can help mitigate harms stemming from contact with strangers and from general surveillance and data collection on minors (to which they cannot meaningfully consent). If a platform exposes a feature to users as a setting, and that feature implicates user privacy, then the default for a child should be the option that maximally preserves privacy (a sketch of such defaults follows this list). This could mean:

  • Account and content are private by default

  • No messages from accounts the child doesn’t follow

  • No comments from accounts the child doesn’t follow

  • Limited recommending of the account

  • Limited use of data collected about the minor

  • A process for minors (e.g., children of content creators) to request removal of content that features them once they turn 18
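As a hypothetical illustration of privacy-maximizing defaults, the sketch below shows every privacy-implicating setting starting at its most protective value for a minor account; all setting names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class AccountPrivacySettings:
    account_private: bool                # account and content visible only to approved followers
    messages_from_non_followed: bool     # can strangers send direct messages?
    comments_from_non_followed: bool     # can strangers comment on posts?
    account_recommendable: bool          # can the account be recommended to others?
    data_collection_minimized: bool      # limit data collected and used about the account

def default_privacy_settings(is_minor: bool) -> AccountPrivacySettings:
    """Return the starting settings for a new account.

    For a minor, every privacy-implicating setting defaults to the most
    protective option; the user (or a parent) can relax settings later.
    """
    if is_minor:
        return AccountPrivacySettings(
            account_private=True,
            messages_from_non_followed=False,
            comments_from_non_followed=False,
            account_recommendable=False,
            data_collection_minimized=True,
        )
    return AccountPrivacySettings(
        account_private=False,
        messages_from_non_followed=True,
        comments_from_non_followed=True,
        account_recommendable=True,
        data_collection_minimized=False,
    )
```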

#6 Limiting notifications

Minors should receive a very limited set of notifications from platforms. Notification settings for minor accounts should default to only those notifications essential to the platform’s core functionality, such as when the minor receives a message or an engagement on their own content. Minors could be given control over additional aspects of notifications, such as the option to silence notifications during certain times of day. In addition, there are “dark patterns” in notifications that could be restricted, such as notifications that are emotionally manipulative or misleading.
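One possible shape for such a default notification policy is sketched below; the notification types, quiet-hours window, and function names are hypothetical illustrations, not a specification.

```python
from datetime import time

# Hypothetical: only notifications tied to the platform's core functionality
# are delivered to minor accounts by default.
ESSENTIAL_NOTIFICATIONS = {"new_message", "reply_to_own_content"}

def should_deliver(notification_type: str, is_minor: bool, now: time,
                   quiet_start: time = time(21, 0),
                   quiet_end: time = time(7, 0)) -> bool:
    """Decide whether to deliver a notification to an account."""
    if not is_minor:
        return True
    if notification_type not in ESSENTIAL_NOTIFICATIONS:
        return False  # no engagement-bait or "you're missing out" nudges
    # Optional quiet hours the minor (or a parent) can configure.
    in_quiet_hours = (now >= quiet_start) or (now < quiet_end)
    return not in_quiet_hours

print(should_deliver("trending_now_nudge", True, time(15, 0)))  # False
print(should_deliver("new_message", True, time(15, 0)))         # True
```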

#7 Parental controls

Parents should be given controls to ensure that their children are having positive experiences on platforms, while still respecting children’s privacy. This could be enabled by adding controls gated by a PIN code or an additional password held by the parent (a sketch of such PIN-gated controls follows this list). At a high level, it could be beneficial if parents were able to control:

  • Privacy settings of the account

    • Limiting who can directly message the child

    • Limiting the visibility of content the child posts and of the child’s account

    • Limiting data collection and use of data collected on the child

  • Setting time windows of when the app can be used

  • Restricting recommended content and accounts

  • Restricting the platform’s ability to send notifications to their child

  • Receiving reports on their child’s usage

    • Reports on total time spent in the app

    • Reports on any violating content the child has been exposed to (this does not need to include the content itself, but should include the policy the content violated).
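The sketch below shows one hypothetical way PIN-gated parental controls and usage reports could be structured. The class, fields, and hashing choice are assumptions for illustration only; a production system would, at minimum, use salted password hashing and proper access controls.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    """Hypothetical PIN-gated controls a parent can set on a child account."""
    pin_hash: str
    allowed_hours: tuple = (7, 21)           # app usable between 07:00 and 21:00
    recommendations_enabled: bool = False
    notifications_enabled: bool = False
    minutes_by_day: dict = field(default_factory=dict)       # usage log
    violating_exposures: list = field(default_factory=list)  # policy names only, not content

    def _check_pin(self, pin: str) -> bool:
        # Illustration only; a real system would use salted, slow hashing.
        return hashlib.sha256(pin.encode()).hexdigest() == self.pin_hash

    def update_setting(self, pin: str, name: str, value) -> bool:
        """Only someone with the parent's PIN can change a control."""
        if not self._check_pin(pin):
            return False
        setattr(self, name, value)
        return True

    def usage_report(self, pin: str):
        """Report time spent and which policies any viewed content violated."""
        if not self._check_pin(pin):
            return None
        return {"minutes_by_day": self.minutes_by_day,
                "policies_violated": self.violating_exposures}

controls = ParentalControls(pin_hash=hashlib.sha256(b"1234").hexdigest())
controls.update_setting("1234", "allowed_hours", (8, 20))
print(controls.usage_report("1234"))
```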

We do not recommend more stringent parental controls or identity verification schemes, because building out more surveillance capacity within a platform carries risks that do not proportionally increase the safety of children. It is also true that not all parents will use these features, some children will game them (by getting access to the PIN code, for example), and abusive partners (or other bad actors) may exploit such controls. However, they are a useful line of defense and should be available to parents.

#8 Limiting targeted advertising

The ability to target minors with advertising should be constrained to a very limited set of parameters, and platforms should not collect any data on minor accounts beyond these parameters (a sketch of such constraints follows this list):

  • Ability to target by age bucket and non-precise location can be allowed.

  • Ability to target based on data from outside the platform should be limited.

  • Ability to target based on predicted interests should be banned.

  • Ability to target based on “look alike audiences” should be banned.
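These constraints can be read as a simple allow-list. The sketch below is a hypothetical validation check with made-up parameter names, not any ad platform’s actual API.

```python
# Hypothetical allow-list: the only targeting parameters permitted for
# campaigns that can reach minors, and parameters that are always rejected.
ALLOWED_MINOR_TARGETING = {"age_bucket", "coarse_location"}
BANNED_MINOR_TARGETING = {"predicted_interests", "lookalike_audience",
                          "offsite_behavior"}

def validate_minor_targeting(targeting_params: dict) -> list:
    """Return the reasons a campaign targeting minors must be rejected
    (an empty list means the targeting is acceptable)."""
    errors = []
    for param in targeting_params:
        if param in BANNED_MINOR_TARGETING:
            errors.append(f"'{param}' targeting is banned for minors")
        elif param not in ALLOWED_MINOR_TARGETING:
            errors.append(f"'{param}' is not on the allow-list for minors")
    return errors

print(validate_minor_targeting({"age_bucket": "13-15",
                                "predicted_interests": ["fitness"]}))
# ["'predicted_interests' targeting is banned for minors"]
```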

Methods that are less effective for making online platforms safer for kids 

It is tempting, in the interest of protecting children, to insist on full visibility into what is happening on a platform (e.g., by banning the use of encryption and proactively scanning all content), to require rigorous age verification mechanisms, or to mandate that platforms adopt content policies related to specific harms. The truth, however, is that these measures are not only of questionable effectiveness and potentially more harmful in their secondary effects; they also leave the core causes of the harms children face on social media untouched (namely, the engagement-based ranking and risky design practices detailed above). Rigorous age and identity verification infringe on privacy and put marginalized people at risk. Blanket bans on end-to-end encryption erode privacy rights and encode surveillance that can be weaponized beyond any initially limited scope. And mandating that platforms develop overly prescriptive policies against specific harms like CSAM, self-injury, eating-disorder, and bullying content is unnecessary: platforms already have plenty of incentives to develop policies against these types of content. Instead, we need to make sure they are operating in good faith, doing what they say they are doing, and that their systems are not amplifying harmful content or patterns.

Demanding that platforms pour money and time into intensive measures that ultimately increase surveillance and undermine privacy (with limited effectiveness) takes resources away from the teams within the platforms that can understand the ways bad actors are abusing the platform and tailor responses to each platform’s shape and design.

How Can Platforms Differentiate Between Users?

All of this naturally raises the question: how do we expect platforms to implement any child-specific measures if we don’t mandate that they verify users’ ages? Platforms have options outside of intrusive age (and identity) verification schemes. They can start by asking users to self-declare their age. This is easily gamed, but some people will tell the truth, and some children’s accounts will be set up by their parents. Platforms can also use classifiers to estimate a user’s age (most do this already) and group likely minors into a “minor” category. There is also a model in which a device or operating system can be set to child mode at setup, signaling to all applications that “this is a child’s phone, operate in child mode,” without any additional steps needed. Finally, and perhaps most effectively, platforms can default all accounts to “child” status. This has the positive secondary effect of extending the privacy and safety priority we give to children to all users, and it puts the onus on adult users to opt in to features deemed risky for children, rather than making the “safe” version of the platform something people have to search for in the settings. A sketch of how these signals could combine follows.
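The sketch below combines the signals described above under a default-to-child posture; the signal names, classifier probability, and threshold are hypothetical and purely illustrative.

```python
def resolve_minor_status(self_declared_age=None,
                         device_child_mode=None,
                         predicted_minor_probability=None,
                         verified_adult=False) -> bool:
    """Return True if the account should be treated as a minor.

    Default-to-child: unless there is an affirmative, trustworthy signal
    that the user is an adult, the account gets the protective defaults.
    """
    if verified_adult:
        return False                      # adult explicitly opted out of minor defaults
    if device_child_mode:                 # OS/device-level "this is a child's phone" signal
        return True
    if self_declared_age is not None and self_declared_age < 18:
        return True
    if predicted_minor_probability is not None and predicted_minor_probability >= 0.5:
        return True                       # age-prediction classifier (hypothetical threshold)
    # No trustworthy adult signal: default to the safer, child-mode experience.
    return True

print(resolve_minor_status(self_declared_age=25))  # True: self-declared age alone isn't trusted
print(resolve_minor_status(verified_adult=True))   # False: adult opted in to adult features
```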

The Integrity Institute is a robust assembly of over 350 integrity experts, including exceptionally high-ranking professionals from major platforms such as Meta, Google, TikTok, and X. This community of experts powers the Institute’s critical-thinking research wing. Our members are seasoned leaders with extensive experience in trust and safety teams across these platforms, bringing a deep understanding of the complexities surrounding the challenge of mitigating harm to children online. Non-partisan and independent, the Institute does not endorse legislation, candidates or political parties but is committed to shaping the future of tech policy in the U.S. We believe that strong and effective public policy can create the necessary accountability to encourage social media companies to design their platforms more responsibly. Our work in crafting legislation aims to protect U.S. citizens in the evolving technological landscape. Through our global collaborations with governments and advocacy groups, the Institute offers unique perspectives on platform transparency, privacy, election integrity, and child safety, ensuring that our recommendations in platform integrity issues are not only informed but also forward-thinking. 
