Exploring the Depths of Online Safety: A Critical Analysis from the US Senate Judiciary Hearing

On January 31, 2024, the US Senate Judiciary Committee held a hearing on child safety online. Prior to the hearing, the Integrity Institute released a list of recommendations on how platforms could better protect children online. Following the hearing, Institute staff share their takeaways in this post-mortem.


Following a series of reports from the Wall St. Journal about how easy it is for pedophile networks to operate on Instagram and how readily Meta’s recommendation systems promote CSAM solicitation content, yesterday the Senate Judiciary Committee held a hearing with five CEOs of social media platforms. The hearing was productive overall, even if, in the limited time available, it didn’t get to every topic that integrity professionals would like to see discussed between policy makers and company leaders. There were a few key themes of the discussion worth diving into more deeply:

  1. Several Senators asked for concrete numbers about how much harm is occurring on the platforms. This is great. Unfortunately, we didn’t get meaningful answers from the leaders.

    1. We need meaningful and concrete numbers on the reach and impact of harmful content.

  2. Several Senators asked how many employees the companies have working on safety and how much money they spend on safety. But this line of questioning misses what we actually need to determine: whether the safety measures companies are using are effective and appropriately resourced.

    1. We need numbers on the effectiveness of safety measures and content moderation, for integrity and trust & safety teams to have representation in company leadership, and for the integrity and safety mindset to be distributed across the whole organization.

  3. There was a lot of discussion about parental controls, but the headline there is that of the 20M teens using Snapchat, only 2% have parental controls enabled. Parental controls are not a comprehensive solution to keeping children safe online. A few Senators raised the issue of the platforms’ design, which is encouraging. We continue to get closer to Senators asking meaningful questions about the design choices companies make while building their platforms, especially the known irresponsible ones.

    1. Parental controls are not a sufficient solution to protect children online, and we need companies to remove unsafe and irresponsible design patterns from their platforms.

  4. The discussion touched on several ways in which the platforms enable harmful behaviors by bad actors. But unfortunately, how the platforms empower bad actors, and what steps could be taken to change that, never became a real discussion topic.

Theme 1: How Much Harm Do Children Experience Online? (Transparency)

Asking the platforms for more data and numbers was a common line in the Senators’ questioning. Which is great! More transparency please! Sen. Cruz asked how many times people were shown the child sexual abuse material (CSAM) support screen on Instagram, and how many people clicked through to see the results. Sen. Hirono asked if Meta would report how many teens experience unwanted sexual advances. Sen. Coons in particular asked each CEO if their company reported how many views self-injury content got, and each platform said no (Snap and Discord said yes, but there seemed to be confusion about what Sen. Coons was asking at that point, because they don’t). Sen. Coons also highlighted the need to understand how the algorithms work and said there would be more in the record. These are all great questions. And sadly, but perhaps unsurprisingly, we didn’t get any meaningful responses from the CEOs about the scale of harms occurring on the platforms.

Measure Impact vs. Effort

The CEOs were happy to talk about the numbers in the various reports they already release, but the transparency reports the leaders cited don’t answer the Senators’ questions. For example, Meta publishes a Community Standards Enforcement Report and TikTok publishes a Community Guidelines Enforcement Report, and both CEOs were happy to talk about how many enforcement actions were taken, how many were proactive, and, in Meta’s case, about the prevalence of violating content. The problem is that the numbers the companies offer really only track the level of effort being put into their content moderation process, not its impact. They don’t tell us whether that effort is a responsible amount relative to the true amount of harm occurring, or whether it is effective in mitigating the harm.

The numbers that we currently get from companies are insufficient. They fail to give society, including policy makers, a sense of how much harm is occurring on the platforms, why those harms are occurring, and the nature of the harms. It was encouraging to see some Senators ask questions in that direction. But we would have liked to see more questions about how many people are exposed to harmful content, such as pro-eating-disorder content, pro-self-injury content, CSAM, or harassment. In addition to how much, we need to see why those exposures are happening. What fraction come from the platforms’ recommendation systems? What fraction come from accounts that the platform recommended to the exposed users? How are the exposures distributed among users? Does a small subset of users experience high levels of harm, or do all users experience a small level of harm? Sen. Coons got at this need by noting that many other industries are required to provide safety labels that contain this kind of information.
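To make the effort-versus-impact distinction concrete, here is a minimal sketch of the kind of exposure-level metrics we would want platforms to report, computed over a small, entirely invented exposure log (the field names, values, and records are our own illustration, not any platform’s actual schema):

```python
from collections import Counter

# Hypothetical exposure log: one record per content view.
# Fields and values are invented for illustration only.
# (user_id, view_was_of_harmful_content, view_came_from_a_recommendation)
exposures = [
    ("u1", True, True),
    ("u1", True, True),
    ("u2", False, False),
    ("u3", True, False),
    ("u4", False, True),
]

total_views = len(exposures)
harmful_views = [e for e in exposures if e[1]]

# Impact metric 1: prevalence, i.e. what share of all views were of harmful content.
prevalence = len(harmful_views) / total_views

# Impact metric 2: what share of harmful views the platform's own
# recommendation systems were responsible for.
recommended_share = sum(1 for e in harmful_views if e[2]) / len(harmful_views)

# Impact metric 3: concentration of harm across users; aggregate averages
# hide whether a small subset of users absorbs most harmful exposures.
per_user_counts = Counter(e[0] for e in harmful_views)
most_exposed_user_share = max(per_user_counts.values()) / len(harmful_views)

print(f"Prevalence of harmful views: {prevalence:.1%}")
print(f"Harmful views driven by recommendations: {recommended_share:.1%}")
print(f"Harmful views landing on the single most-exposed user: {most_exposed_user_share:.1%}")
```

None of these impact numbers can be derived from counts of enforcement actions alone, which is exactly the gap in the transparency reports the CEOs pointed to.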

Mark Zuckerberg said that Meta was industry leading in transparency, and they have a decent case there. But that doesn’t mean that society gets enough information from the platforms to assess their safety; it is primarily a reflection of how little transparency we get from the industry as a whole. This chart gives an example of the most basic data that would be needed just to begin making that assessment.

We have a long way to go before we get basic transparency, let alone meaningful, comprehensive, and verifiable transparency. And it should be noted that almost all of this data is either actively measured internally already or would be straightforward to measure internally.

The Senators also asked many questions about exactly how much money the companies are spending on making their platforms safer for kids, and what fraction that spend is of their total revenue or profit (apparently $2B out of $85B for TikTok and $5B out of $115B for Meta, or roughly 2.4% and 4.3%, respectively). Given how upset the public is with the platforms right now, and justifiably so, it isn’t surprising that we want to see the companies spending significant fractions of their revenue on safety. And to be clear, building a safe platform with sufficient resources to remove harmful content before it hurts people is expensive. But tracking how much money companies spend on safety is, again, tracking how much effort they put in, not how effective their efforts are or whether they are making progress in reducing harm to their users.

To measure the effectiveness of content moderation systems, we would need transparency into what fraction of user reports actually lead to a moderation decision, broken down by automated decisions and human review; how many views violating content gets before being moderated; and what the time lag is between reports and reviews. These questions become especially revealing when broken down by harm type. How many reports of harassment experienced by teens never get reviewed by a human? Because the number likely isn’t 0 for any platform. And the speed of review is meaningless if the reports are auto-dismissed.
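As a rough illustration of what that kind of effectiveness transparency could look like, here is a minimal sketch computed over a small, entirely invented log of user reports (the fields, values, and categories are assumptions made for this example, not any platform’s real reporting data):

```python
from statistics import median

# Hypothetical user-report log; every field and record is invented.
# Each record: (hours_until_review, reviewed_by, action_taken, views_before_action)
# reviewed_by is "human", "automated", or None if the report was never reviewed.
reports = [
    (2.0, "automated", True, 150),
    (48.0, "human", True, 12000),
    (None, None, False, None),      # auto-dismissed, never reviewed
    (6.0, "human", False, None),    # reviewed, but no action taken
    (1.0, "automated", True, 40),
]

total_reports = len(reports)
reviewed = [r for r in reports if r[1] is not None]
actioned = [r for r in reports if r[2]]

# Metric 1: fraction of reports that lead to a moderation decision,
# broken down by automated vs. human review.
action_rate = len(actioned) / total_reports
human_review_share = sum(1 for r in actioned if r[1] == "human") / len(actioned)

# Metric 2: how many views the violating content accumulated before action.
median_views_before_action = median(r[3] for r in actioned)

# Metric 3: report-to-review time lag, and the share of reports that were
# auto-dismissed without any review at all.
median_review_lag_hours = median(r[0] for r in reviewed)
never_reviewed_share = 1 - len(reviewed) / total_reports

print(f"Reports leading to action: {action_rate:.0%} ({human_review_share:.0%} via human review)")
print(f"Median views before action: {median_views_before_action:,.0f}")
print(f"Median report-to-review lag: {median_review_lag_hours:.1f} hours")
print(f"Reports never reviewed: {never_reviewed_share:.0%}")
```

Numbers like these, broken down by harm type, language, and country, would tell us far more about whether safety spending is working than a headline headcount or budget figure.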

And of course, it is essential to understand the design of the platform, and the extent to which it amplifies harm, when trying to assess whether the safety systems are enough. If the platform is designed irresponsibly, if it is amplifying and recommending harmful content to huge numbers of people, then it doesn’t matter how much money the company spends on content moderation or trust and safety teams; it won’t be enough.

Theme 2: Are Companies Investing in Safety Appropriately and Effectively? (Staffing, Org Structure, and Governance)

Over the past year and a half, we have seen numerous large layoffs across most social media companies. The impact on trust and safety teams has ranged from catastrophic, as at Twitter, to “definitely not good” at the companies that took more “20% cuts everywhere” approaches. Several Senators raised issues about staffing and employment, which is good. Staffing levels are important, even if they are insufficient on their own. Sen. Welch directly asked how the companies could claim to be improving child safety while laying off the employees working on it. Subjecting integrity, trust and safety, and wellbeing teams to cuts, given the harms occurring on the platforms, definitely feels irresponsible.

Meta and TikTok touted that they have tens of thousands of people working on platform safety; the other leaders reported thousands. But numbers here can be confusing. Platforms could fire full-time employees working on safety and hire more outside contractors to review content, or fire contractors and hire more full-time employees, or count teams that spend only a small fraction of their time on safety. A company could launch a new, dangerous feature that greatly increases exposures to harmful content and then hire more content moderators to attempt to clean up the mess. In that situation the company hires many new trust and safety employees, but overall platform safety likely gets worse. It’s simply impossible to say, “Oh, this company has 10,000 people working on safety, so of course that is an appropriate response to the harms occurring on the platform.”

Explain How Investment Choices Translate into Impact

The total number of employees also doesn’t say how much influence the employees who care about safety have within the company. If you have 10,000 full-time employees working on integrity but ignore all of their findings and recommendations, that’s still a bad situation. So when Linda Yaccarino says X has increased its trust and safety staff over the past 14 months, it’s important to note that, very conveniently, the first round of X’s layoffs, in which they laid off nearly everyone working primarily on trust and safety, was 15 months ago, and that increasing headcount doesn’t mean safety teams have more influence on company decisions. We’ve seen the contrary. Senators brought up internal emails from Nick Clegg and Adam Mosseri, leaders within the company who don’t focus primarily on safety, asking for more employees to work on safety and being turned down in favor of “other priorities”. Clegg asked for only 45 more people, which the Senators noted. If leaders in the company are unable to get resources for safety, what are the odds that lower-level integrity and trust and safety teams are getting their staffing requests approved?

After Sen. Hawley gave a statement about Instagram’s internal studies on the harms the platform causes teens (studies that are still very much worth discussing with company leaders), he asked Mark Zuckerberg, “Who did you fire?” Again, the anger is understandable, and we desperately need more accountability. But the real question is, “How many people did you hire to actually solve these problems?”

Where would we have liked to see the discussion go? At a high level, we need to understand the measurable changes in Trust & Safety investment over time. This should include employee count; the use of human moderation vs. AI (broken down by role, harm area, language, and country); whether the company’s effort is meeting the scale of harms; and whether it is translating into effective mitigation of harm.

We would have liked a comprehensive discussion of the impact the industry downturn has had on integrity and trust and safety teams. How have full-time employee levels changed over the past year and a half, since before the industry downturn? How have part-time and contract content moderator levels changed over the same period?

We also need to see how much influence those employees have. How many changes to the platform have been blocked based on safety concerns? How are integrity and trust and safety teams, and their thinking, integrated into product teams? How does the company measure the success of the platform? How does the company measure the safety of the platform? How often does the company see conflicts between how it measures success and the safety of the platform, and how does it resolve them?

Meta launched end-to-end encryption by default on their messaging services at the end of last year, and there is reporting that there was a lot of internal debate about how responsible that was, given that it would impair detection of child grooming and the sale of illegal goods. How much opportunity is there for safety teams to block new platform features that can play a significant role in harms? And let’s not forget that everything gets worse when you go international: how many people does the platform have working on content moderation and policies in non-English languages? And how well do the automated content classification systems work on non-English content?

Theme 3: Are the Platforms Responsibly Designed? (Platform Design, Recommendation, and Parental Controls)

It was great to see Senators express concerns about the designs of these five major platforms, mentioned as early as Sen. Durbin’s and Sen. Graham’s opening statements! Sen. Kennedy even brought up the addictive and harmful nature of recommender systems ‘pushing hot buttons’ in individuals over and over. It is encouraging to see Senators seemingly informed about the complexities of platform design and recommender systems. The understanding of how platform design leads to harm seems to have increased dramatically over the past year. Forcing the company leaders to defend risky designs is key for accountability.

Parental Controls Do Not Sufficiently Mitigate Harm

There was also a lot of discussion about parental controls. Every (or nearly every) platform there expressed support for parental controls. And on this front, we actually got new and valuable data from Snap: of the 20 million teen users of Snapchat, only 400 thousand, or 2%, are linked to a parent for the use of parental controls. We’ve heard a similar story in reporting from the Washington Post, which found that less than 10% of teens on Instagram have parental controls enabled. This is a valuable line of discussion. It is a key example of how regulation that mandates a particular feature (e.g., parental controls) will be less effective than regulation that creates accountability and incentives for responsible design and proper resourcing of safety teams as a whole.

Pushing the CEOs on what percentage of parents opt into parental controls gets at a great metric for tracking the impact of those controls. Do the companies actually think parental controls are the most important way to keep teens safe, when less than 10% of teens actually use them? Mark Zuckerberg said that “it was what parents wanted”. But what if what parents know to ask for isn’t sufficient to keep platforms safe for their kids? Many parents don’t understand the full suite of parental controls that already exist, as Sen. Klobuchar pointed out, and the average parent isn’t an expert in responsible platform design.

Measure How Product Design & Features Contribute to Harm

It’s important to state that we know some platform design patterns are more irresponsible and harmful than others. On the topic of child exploitation online, there are design patterns, such as engagement-based ranking, that lead to CSAM solicitation material being amplified by recommendation systems, and direct messaging features that empower predators. We can compare, for example, the different direct messaging capabilities offered by YouTube, TikTok, and Meta, and see that each of the platforms made different choices. Those different choices will play a role in the harms that occur on the platforms.

Meta is the only company that allows teens to have end-to-end encrypted conversations with adults they may not know in real life. Does Mark Zuckerberg have a good explanation for this choice? What impact has that change had on detection of bullying and harassment of teens? When making product design decisions, teams could measure not only growth and adoption, but also any changes to enabling known and potential harms.

Platforms can have features that empower bad actors. Real safety comes from understanding how systems can be abused and removing the tactics bad actors use to exploit the platform. Multiple Senators brought up a tragic case of sextortion, run out of Nigeria, that ended with the death of an American teen. Cases like these really merit an evaluation of the platform. How is it possible that actors in Nigeria successfully targeted and connected with a teen in America? How many accounts do these Nigerian actors have? How many times had they run similar operations on other teens? Why does Instagram even allow direct messages for teens, when YouTube completely eliminated them and TikTok turned them off for young teens? How did TikTok land on 16 as the magic age at which a person is ready to have private conversations with any other human on earth?

We would have liked to see even more discussion about how the platforms’ design choices interplay with harms. Why do CSAM and CSAM solicitation material perform so well on Instagram? How do the platforms’ recommendation systems work, and how does CSAM, self-injury, and eating disorder content perform in them? How many exposures to harm come from repeat offenders? How many harms occur in private spaces? What do the platforms do to tackle the problem proactively, before the harms actually occur?

Theme 4: Platforms Should Not Empower Harmful Behaviors and Bad Actors More than Normal Users

The sad truth is that a small number of bad actors can be responsible for a huge percentage of the harms. When you see bad actors in Nigeria successfully running scams and operations, you know it’s because the platform is “leaving doors unlocked” and bad actors have found easy exploits. For example: did Instagram know the account was run out of Nigeria? Instagram has IP address and device signals that can locate accounts at the country level. If the bad actors were spoofing their location, that is arguably even worse. Is Instagram letting accounts that show signs of location spoofing message American teens? Or, worst of all, is Instagram actually being fooled by some spoofing exploit? How many of these cases would be prevented if Instagram added friction, such as closing off the methods bad actors use to mass-create fake accounts, before direct messaging is enabled? We sadly don’t know whether Instagram has some basic exploit that is exposing numerous users to easily avoidable harms. But hearings like these could be opportunities to dig into those issues.

Sen. Cruz also dug into an odd design choice made by Instagram that was uncovered by a Wall St. Journal and Stanford Internet Observatory investigation. Instagram shows “support screens” in response to searches that involve sensitive terms. This is primarily done to ensure that people in critical situations are shown resources that could help them. The screens were most likely created as an interstitial to connect users who search for terms related to self-injury or eating disorders with help, but they included an option to “see results anyway”. Sen. Cruz highlighted what the Wall St. Journal and SIO found: these screens also appeared for CSAM-related search queries, and users were presented with the option to click through and “see results anyway”, even though those results had, by Instagram’s own determination, a likelihood of containing CSAM.

Sen. Cruz focused on how many times Instagram had shown that screen and how many times users had clicked through to “see results anyway”. Those are good questions to ask. But we missed out on a discussion of how Instagram landed on that being a sensible choice, of whether Instagram uses suspicious behavior like this as a signal to protect children from potentially dangerous accounts, and of how it was decided that a warning screen was the appropriate response to CSAM-related searches, rather than a more comprehensive evaluation of content matching those terms. There is a large quantity of “borderline” content on all platforms. How the platforms handle borderline content, and the users who post and interact with it, are important issues to investigate.
