Integrity Case Study: Protecting Kids From Abuse

By Abhi Chaudhuri, Dominique Wimmer, Jenna Dietz, Matt Motyl and David Jay

How can we help the kids who are actively being harmed?

This question came up more and more often within Google’s Trust and Safety team responsible for addressing Child Sexual Abuse Material, or CSAM. Like almost any service that deals with images or videos online, Google had to contend with its services being used to store and circulate media depicting horrific acts of child abuse. This media is both extremely harmful and highly illegal, making it necessary to identify and remove it quickly. Every day, images were being flagged and reported to the National Center for Missing and Exploited Children (NCMEC), but it didn’t feel like enough.

Most of the images being identified had been circulating for years. Because they were known, they were easy to identify and flag automatically. But the team knew that far too many kids were still actively being abused, kids who needed help. To help them, the team would need a way to detect newly generated images, which posed challenges. CSAM cannot legally be stored by anyone, including the team trying to combat it, which made it hard to train an AI system to detect new forms of content. Focusing on saving kids would pull resources from other kinds of CSAM detection, and even then, more resources would be needed.

A plan began to form, and new tactics began to emerge. The team put together a slide showing the number of kids rescued annually because of their reporting and started giving talks around the company. Who was interested in helping that number go up? They began to get buy-in from senior leadership, and engineers and data scientists from unrelated teams began to volunteer their time. Because of their work, the number of law enforcement actions per year that removed children from high-risk situations increased by a factor of 12.

The scale of the problem is vast. One in eight children is subjected to online sexual exploitation and abuse each year. These crimes range from adult-initiated unwanted sexual talk to the nonconsensual taking, sharing, and exposure of sexual images and videos. Technology has facilitated this growing global crisis by helping abusers find their victims, find other abusers who may want to buy their CSAM, use video-generating artificial intelligence models to create “deepfake” sexual images and videos of children, and evade law enforcement.

Rescuing kids means physically removing them from abusive situations, but also freeing them from deeply scarring forms of control such as sextortion scams. The work of Trust and Safety teams is one part of a global movement working to counteract the sexual exploitation of children, a movement that includes clearinghouses such as NCMEC, leading nonprofits such as Thorn and the Internet Watch Foundation, and organizations led by adult survivors of child sexual abuse. The work of combatting child sexual abuse online ranges from efforts to change recommendation algorithms that drive demand for CSAM to supportive communities that give voice to survivors.

Working to address this global crisis means understanding this global movement: the laws it has passed, the tools it has built, and the interventions that it has found effective. Whether you are a Trust and Safety team looking to address this challenge for the first time or a tech worker interested in exploring how your work might help address CSAM, understanding how to plug into this movement is the key to creating meaningful change.

US Law and Regulation 

Effective policy is essential to combat the possession and distribution of CSAM. While recent progress, such as the REPORT Act, has strengthened reporting requirements, significant gaps remain.

Federal statute 18 U.S.C. §2258A requires US companies to report suspected CSAM on their platforms to NCMEC when they become aware of it, but to date no law requires these companies to detect it proactively. Such efforts could include free tools to detect known and/or potentially unknown CSAM (see below). However, companies may be hesitant to implement such tools because of the reputational risk associated with the discovery and reporting of CSAM on their platforms. In 2024, of the more than 1,900 electronic service providers (ESPs) registered with the CyberTipline, only 296 submitted reports.

Additionally, while the CyberTipline serves as a global clearinghouse for instances of CSAM, no other country mandates reporting CSAM to NCMEC at this time. Notably, in 2024, 84% of CyberTipline reports involved CSAM uploaded from jurisdictions outside the US. Furthermore, according to a global review by the International Centre for Missing & Exploited Children (ICMEC), of the 196 countries evaluated, only 38 have mandatory reporting requirements for ESPs at all.

The Role of NCMEC

NCMEC’s CyberTipline was established in 1998 as a means for ESPs and the public to report incidents of child sexual exploitation. Since then, it has received over 195 million reports related to suspected CSAM. Four years later, the Child Victim Identification Program (CVIP) was created with a dual mission: helping law enforcement locate unidentified victims while also providing them with information about previously identified victims. More than 30,000 victims have been identified by law enforcement and their cases submitted to NCMEC.

In 2024, the CyberTipline received 20.5 million reports covering 29.2 million separate incidents of child sexual exploitation. These reports contained 62.9 million images, videos, and other files. NCMEC analysts review the files and label them based on factors such as the type of content and the estimated age range of the child. A digital fingerprint, or hash, is then generated for each file so that hash-matching technology can detect future copies of the same file. This not only reduces human review of duplicative content and lets platforms detect harmful images without storing them, but also allows analysts to focus on potentially new or unseen material. These possibly time-sensitive reports can then be quickly escalated to law enforcement. In 2024, NCMEC identified and escalated more than 51,000 reports that were urgent or involved a child in imminent danger.
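To make the mechanics concrete, the following is a minimal Python sketch of matching uploaded files against a shared hash list, assuming an illustrative set of known hashes. Production systems typically use perceptual hashes such as PhotoDNA or PDQ rather than the exact cryptographic hash shown here, so that resized or re-encoded copies of a known file still match.

    import hashlib

    def sha256_of_file(path: str) -> str:
        # Compute a cryptographic fingerprint of the file's bytes.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_match(path: str, known_hashes: set) -> bool:
        # Compare the upload's fingerprint against an industry-shared hash list;
        # the platform never needs to store the matched content itself.
        return sha256_of_file(path) in known_hashes

A platform would call is_known_match on each new upload and, on a match, route the file into its reporting workflow rather than storing or serving it.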

Additionally, if content reported by the public is verified by analysts as CSAM, exploitative content, or predatory text, NCMEC can notify platforms to remove it. In 2024, the average time it took platforms to remove such content was about three days. To create more resources for victims, NCMEC launched Take It Down in 2022, a free service that lets victims proactively report nude, partially nude, or sexually explicit images of themselves for removal. The images are converted to hashes, which are added to a hash list that enrolled ESPs monitor against and use to proactively remove matching content. In 2024, NCMEC received more than 166,000 hashes through the Take It Down program. The recently passed Take It Down Act grants NCMEC greater powers to support the takedown of CSAM.

NCMEC is the front line of defense against the dissemination of CSAM. The CyberTipline and CVIP programs work with law enforcement worldwide for the identification and recovery of victims of online child sexual exploitation and abuse. To understand how platforms identify harmful content to report to NCMEC, it is helpful to understand the tooling that they have available to detect and report CSAM.

CSAM Intervention Tooling

To effectively combat CSAM, digital platforms rely on a multi-layered arsenal of tools combining hash-based detection (for known content), AI-driven analysis (to detect new material), and collaborative frameworks. 

Ultimately, effective CSAM prevention requires a layered approach combining known-content detection, novel abuse identification, human review, and cross-industry cooperation. With the advancement of generative AI technologies, emerging threats such as hyperrealistic synthetic abuse content make that layered approach even more important.
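As a rough illustration of how these layers might fit together, the sketch below triages an uploaded file through hash matching, then a classifier for potentially new material, then human review. The classifier, its threshold, and the decision labels are placeholders for illustration, not any specific vendor's API.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str   # "report", "review", or "allow"
        reason: str

    def match_known_hash(file_hash: str, known_hashes: set) -> bool:
        # Layer 1: match against an industry-shared hash list of known content.
        return file_hash in known_hashes

    def classify_unknown(file_bytes: bytes) -> float:
        # Layer 2: placeholder for an ML classifier that scores the likelihood of
        # novel abusive content; a real system would call a vetted detection model.
        return 0.0

    def triage(file_hash: str, file_bytes: bytes, known_hashes: set) -> Decision:
        if match_known_hash(file_hash, known_hashes):
            return Decision("report", "matched a known-content hash list")
        if classify_unknown(file_bytes) >= 0.9:   # threshold is illustrative
            return Decision("review", "high classifier score; escalate to trained reviewers")
        return Decision("allow", "no signal; continue routine monitoring")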

Developing a Foundational CSAM Policy and Understanding Legal Frameworks

For startups approaching Trust and Safety for the first time, the following guidance on creating a comprehensive CSAM policy may be helpful. Protecting children from online exploitation is a critical responsibility for digital platforms and organizations. Before implementing any technical solutions, organizations must understand what CSAM encompasses and the legal frameworks surrounding it. A comprehensive CSAM policy begins with a deep understanding of the legal and ethical landscape, including the U.S. mandate to report apparent CSAM to the NCMEC CyberTipline (18 U.S.C. §2258A) and global initiatives like the EU’s proposed CSAM Regulation. Organizations must create clear policy statements demonstrating their commitment to online child safety and align with global industry guidelines, such as those from UNICEF and the International Telecommunication Union, to ensure a holistic approach that covers various technologies, including mobile phones, game consoles, connected toys, and AI-driven systems. The National Society for the Prevention of Cruelty to Children (NSPCC) provides templates for online safety policy statements that can be tailored to an organization's context. Lantern, developed by the Tech Coalition, provides the first cross-platform signal-sharing program that companies can use to strengthen how they enforce their child safety policies.

Technical Implementation and Operations

Effective CSAM prevention also hinges on strong technical infrastructure, reliable detection systems, thoughtful team design, and industry collaboration. Tools utilizing industry-shared hash-matching technology enable automated detection and reporting while limiting employee exposure to harmful content. Moreover, with the increasing use of generative AI technologies, scanning both inputs and outputs for abusive content is crucial. Secure systems and strict access protocols to protect sensitive data are also paramount for CSAM investigations. Equally important is supporting the well-being of staff involved in CSAM investigations with informed consent, adequate staffing, mental health support, and clear opt-out options. Collaboration with industry groups like the Tech Coalition and consistent reporting protocols further strengthen child protection efforts. Ultimately, safeguarding children online demands an evolving, cross-functional strategy that blends policy, procedures, technology, team care, and compliance.
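As a sketch of what scanning generative AI inputs and outputs could look like in practice, the wrapper below checks a prompt before it reaches a model and the generated output before it reaches the user. generate_image, scan_for_child_safety_signal, and log_and_report are hypothetical placeholders for whatever model and detection service an organization actually uses.

    def scan_for_child_safety_signal(payload) -> bool:
        # Placeholder: return True if the payload triggers a child-safety detection.
        return False

    def log_and_report(payload) -> None:
        # Placeholder: record the event and route it into the reporting workflow.
        pass

    def generate_image(prompt: str) -> bytes:
        # Placeholder for a call to a generative model.
        return b""

    def safe_generate(prompt: str):
        # Scan the input before it reaches the model.
        if scan_for_child_safety_signal(prompt):
            log_and_report(prompt)
            return None
        image = generate_image(prompt)
        # Scan the output before it is returned to the user.
        if scan_for_child_safety_signal(image):
            log_and_report(image)
            return None
        return image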

The fight against child sexual abuse material demands our collective commitment and innovation. As technology evolves, so too must our approaches to protecting vulnerable children worldwide. While the challenges are immense, from addressing newly generated content to improving cross-border cooperation, the combined efforts of technology companies, law enforcement, nonprofit organizations, and regulatory bodies are making significant progress. Every implementation of detection tools, every policy improvement, every life-saving intervention matters. Whether you're building a startup that needs comprehensive CSAM policies or working within an established organization looking to strengthen its protective measures, your contribution matters. By leveraging the resources, tools, and collaborative frameworks outlined here, from hash-matching technologies to AI-driven detection systems, we can collectively create a safer digital landscape. Step by step, by dismantling the systems that perpetuate this harm, we can prevent immeasurable harm to kids. The protection of children isn't just a legal obligation; it's our shared responsibility.
