Trust & Safety: Navigating Social Media Integrity in an Election Year

By Rebecca Scott Thein, Technical Program Manager at Blackbird.AI and Integrity Institute Member.

Want to discuss cybersecurity and democracy? You can find time to chat with Rebecca at TrustCon 2024 here.

This piece originally appeared on Blackbird.AI’s blog.


Some 4.1 billion people across 64 countries are heading to the polls this year. Democratic institutions remain strong, but they are also under attack. The proliferation of AI-enabled narrative attacks, fueled by misinformation and disinformation and amplified by social media, threatens democracies around the globe.

I know this because, before joining Blackbird.AI, I spent five years building Trust & Safety (T&S) programs for major social media platforms. I've watched misinformation and disinformation narratives ripple across the web, and those experiences transformed my approach to this complex problem.

At one major social media company, I helped establish an integrated T&S solution focused on election integrity systems. The work was rewarding but demanding: our team had to keep the platform a space where meaningful, accurate discourse could thrive despite an ever-changing landscape of online misinformation and disinformation. The Brazilian elections, for example, underscored how delicate the balance is when managing election integrity on social platforms. Our teams built solutions to detect and counter malicious behaviors, which required a nuanced understanding of user dynamics and a working knowledge of the mechanics by which misinformation and disinformation spread. We paired machine learning classifiers with human review systems to identify and remove content that violated our policies while promoting authentic participation.
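To make that hybrid approach concrete, here is a minimal, hypothetical sketch of how such a pipeline might route content: a classifier scores each post, high-confidence violations are actioned automatically, and uncertain cases go to a human review queue. Everything here, the thresholds, the keyword stand-in for a real model, and the names, is illustrative rather than any platform's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"              # high-confidence violation: automated takedown
    HUMAN_REVIEW = "human_review"  # uncertain score: route to a reviewer queue
    NO_ACTION = "no_action"        # low score: leave the content up

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Stand-in for a real ML classifier; returns P(policy violation).
    A toy keyword heuristic is used here so the sketch runs end to end."""
    suspect_phrases = ("polls closed early", "ballots were destroyed")
    return 0.9 if any(p in post.text.lower() for p in suspect_phrases) else 0.1

def route(post: Post, remove_at: float = 0.95, review_at: float = 0.7) -> Action:
    """Score a post and pick an enforcement action by threshold band."""
    score = violation_score(post)
    if score >= remove_at:
        return Action.REMOVE
    if score >= review_at:
        return Action.HUMAN_REVIEW
    return Action.NO_ACTION

print(route(Post("1", "Heads up: polls closed early across the city!")))
# -> Action.HUMAN_REVIEW
```

The key design choice is the middle band: automation handles clear-cut cases at scale, while humans adjudicate the ambiguous ones, where enforcement errors are most costly.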

LEARN MORE: What Is A Narrative Attack?

Collaboration across multiple teams was crucial as we balanced swift enforcement with the transparency needed to preserve public trust. It also meant evolving our strategies during significant global events, constantly learning and adapting our defenses to counter new threats. The task was formidable, but seeing our systems in action, mitigating false information and protecting civic engagement, brought a sense of fulfillment and reaffirmed the importance of our work in maintaining the integrity of the online public square.

Every modern politician relies on social media platforms to promote policy ideas and get out the vote. T&S practitioners at these platforms have been preparing ways to let democratic values be expressed freely while keeping misinformation and disinformation from undermining civic engagement. Yet activating a T&S response when a crisis strikes is far harder than preparing for one. Preparation and ruthless prioritization happen during 'normal operations,' when a team can ramp up to support a specific event, even hiring additional staff and contractors.

LEARN MORE: Use Case: Why Government Leaders and Policymakers Need Narrative Risk Intelligence

A Brief History of Trust and Safety

The T&S profession emerged in the late 1990s, initially centered on protecting users from spam and online scams on platforms like eBay and Yahoo. Over time, the field expanded to a broader range of responsibilities, such as combating child sexual abuse material (CSAM) and managing community standards on social media. Milestones like Microsoft's 2009 creation of PhotoDNA, a tool designed to detect known CSAM, and the introduction of transparency reports by Google and Meta marked a significant shift toward proactive safety measures and accountability in the digital sphere.

Despite being a relatively new field, T&S has become a vital part of the digital economy, ensuring safe online environments by detecting and mitigating many forms of harm. It is also a revenue enabler: by prioritizing online safety, it builds the user trust and credibility that platforms depend on. But its importance transcends finances; T&S stands at the forefront of highly politicized debates over free speech, content moderation, and digital governance.

Contemporary T&S teams operate across diverse digital sectors, from social media platforms to online marketplaces. Each team must assess its own risks, crafting strategies that reflect organizational dynamics, market pressures, and regulatory requirements. Whether a company calls this work "integrity" or "trust and safety," both terms embody the same commitment to creating a safer digital space.

LEARN MORE: Trust and Safety Curriculum (TSPA) 

Challenges and Evolution

During planned high-stakes events like elections, and unplanned crises like natural disasters or pandemics, T&S teams face heightened scrutiny and polarization. That pressure forces difficult prioritization decisions about which interventions to pursue. Teams ramp up during normal operations, often hiring extra staff and contractors ahead of major civic events. Yet layoffs and restructuring frequently follow as political and financial pressures fluctuate, costing organizations institutional knowledge and operational continuity.

Mass layoffs across the tech sector in 2022-2023 and new regulatory requirements imposed by the Digital Services Act (DSA) have significantly impacted T&S functions in recent years. Many consumer social media companies cut their T&S teams, leading to uncertainty in addressing emerging online risks. Nonetheless, these professionals remain committed to safeguarding the digital public square in this crucial election year.

LEARN MORE: How to Combat Misinformation and Disinformation: Lessons from a Social Media Trust and Safety Expert

Election Integrity

T&S teams typically assign elections a tier or risk level based on political engagement, market participation, regulatory landscape, and reputational risk. The policies enacted aim to clarify user rights and establish guidelines for mitigating misinformation, inauthentic behavior, and other potential harms. External entities like fact-checkers, researchers, and commercial content moderators often aid in detecting policy violations, but recent restrictions on data access and layoffs have hampered these efforts.
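As a rough illustration of the tiering idea, here is a hypothetical scoring helper that maps the four factors above onto a tier. The weights, cutoffs, and tier responses are invented for this sketch; real programs calibrate them per election cycle and market.

```python
# Hypothetical election risk-tiering helper. Weights and cutoffs are
# illustrative only, not any platform's actual methodology.
WEIGHTS = {
    "political_engagement": 0.30,
    "market_participation": 0.20,
    "regulatory_pressure":  0.25,
    "reputational_risk":    0.25,
}

def election_tier(signals: dict) -> int:
    """Map 0-1 risk signals onto a tier: 1 = highest risk, 3 = lowest."""
    score = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    if score >= 0.7:
        return 1  # e.g., dedicated war room, surge staffing, daily escalations
    if score >= 0.4:
        return 2  # enhanced monitoring and on-call coverage
    return 3      # standard operations

# Usage: a contested national election in a heavily regulated market.
print(election_tier({
    "political_engagement": 0.9,
    "market_participation": 0.8,
    "regulatory_pressure":  0.9,
    "reputational_risk":    0.7,
}))  # -> 1
```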

Despite these challenges, there are opportunities for former T&S professionals to engage through fellowships, task forces, and academic research. Building coalitions with think tanks, industry associations, and advocacy groups can provide valuable support in safeguarding democratic discourse.

Milestones like transparency reports and improved content moderation tools have created a foundation for more resilient civic engagement. However, platforms must remain vigilant against emerging narrative threats, particularly synthetic media, which could significantly distort public perception during elections.

LEARN MORE: How Compass by Blackbird.AI Uses Generative AI to Help Organizations Fight Narrative Attacks

Lessons

I've learned valuable lessons in Trust & Safety that may help others who are combating misinformation and disinformation and working to keep online environments safe.

Lesson 1: Real-time Monitoring and Swift Intervention

One of the most critical lessons I've learned is the importance of real-time monitoring of online narratives. By using advanced tools and techniques, such as those taught in the Atlantic Council's free OSINT training program "Digital Sherlocks," I can detect misinformation swiftly and intervene in time to safeguard online discourse.
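As one simple illustration of real-time narrative monitoring, the sketch below flags a tracked narrative whenever its mention count in the current window spikes well above its recent baseline. The window size, spike factor, and counts are all hypothetical; production systems layer far richer signals (account behavior, network structure, cross-platform spread) on top of raw volume.

```python
from collections import deque

class SpikeDetector:
    """Flag a narrative when mentions in the current window exceed
    spike_factor times the average of recent windows."""

    def __init__(self, history_windows: int = 24, spike_factor: float = 3.0):
        self.history = deque(maxlen=history_windows)  # past window counts
        self.spike_factor = spike_factor

    def update(self, mentions_this_window: int) -> bool:
        # Compute the baseline before recording the new count.
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        self.history.append(mentions_this_window)
        if baseline == 0.0:
            return False  # not enough history to judge yet
        return mentions_this_window > self.spike_factor * baseline

# Usage: feed hourly mention counts for one tracked narrative.
detector = SpikeDetector()
for count in [12, 15, 11, 14, 13, 90]:  # sudden jump in the last hour
    if detector.update(count):
        print(f"Spike detected: {count} mentions this hour")
```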

Lesson 2: Diverse Perspectives and Cultural Sensitivity

Managing language capabilities and cultural context in real time has taught me the necessity of diverse perspectives and cultural sensitivity. I now prioritize building inclusive teams and leveraging multilingual resources to address global misinformation challenges. Through my work with the Integrity Institute and All Tech is Human, I've learned that sharing information and ensuring global representation on these issues are key to developing comprehensive strategies.

Lesson 3: Preserving Institutional Knowledge and Bridging Objectives

The tech layoffs taught me the significance of sharing information and preserving institutional knowledge. I now advocate for broader dissemination of knowledge, empowering teams with insight into platform complexities so they can combat misinformation more effectively. I also encourage Trust and Safety professionals to follow industry leaders' social media accounts and newsletters; a personal favorite is Anchor Change by Katie Harbath, which offers valuable insights into election integrity and strategic foresight. As we navigate this inflection point, we must unite, embrace emerging challenges, and prepare for the future beyond this year's elections.

Additionally, I aim to bridge the gap between the mission of combating misinformation and disinformation and the business objectives behind it. Pursuing truth is noble, but it's equally important to consider the financial implications and to balance accountability with strategic decision-making.

The intersection of challenges and opportunities in the Trust and Safety landscape underscores the critical need for businesses and governments to invest in narrative intelligence platforms that combine AI with experienced analysts. Only then can they respond to the growing volume, velocity, and sophistication of misinformation and disinformation, safeguard democratic processes, and counter foreign adversaries' narrative attacks.


Want to discuss cybersecurity and democracy? You can find time to chat with Rebecca at TrustCon 2024 here.
