Why Disinformation Campaigns are the Most Lethal Form of Modern Warfare

By Alexis Crews, Integrity Institute Fellow

This piece was first published on Medium and represents the author’s individual opinion.


Terrorist organizations are rarely, if ever, fully defeated; they are merely degraded, usually only for a period of time, before they wreak death and destruction all over again, albeit under a new name or leader. Misinformation has the same potential.

I remember checking my work email in 2020 and reading the handover notes from my colleagues in Dublin that mentioned something about a ‘Russian disinformation campaign.’ At the time, I was one of the ‘crew leads’ for the 2020 US Election War Room at Meta (formerly Facebook). The job entailed running every aspect of the war room related to the election, from determining whether policies and integrity tools needed to be updated or created to handling real-time crises. This wasn’t the first time the words ‘Russian disinformation campaign’ had come across my email, but it was the first time we had to devise a plan to investigate and determine whether it was a real disinformation campaign run by a cyber farm in Eastern Europe.

The process wasn’t cut and dried. It involved peeling back layers: Facebook and Instagram page and friend connections, account history, prior infractions, probable locations, the likelihood that accounts were fake, and finally whether a web of people or just one individual was intentionally posting false information. What I’m describing is similar to what investigative journalists do when reporting, or what analysts in the intelligence community compile for tactical decisions. By conducting these deep dives into disparate bits of information, we could formulate a hypothesis that would later be supported by concrete data. At Meta, uncovering and analyzing information wasn’t a job for just one person or team; we pulled people from various teams to investigate accounts for hours before formulating a suitable hypothesis, drafting a write-up, and sending it up the chain for approval on next steps. This was the process for uncovering CIB, or Coordinated Inauthentic Behavior.
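
To make the investigative step concrete, here is a deliberately simplified sketch of one signal a deep dive like this might start from: clusters of accounts posting identical text within a short window of each other. It is an illustration only; the data fields, thresholds, and function names are my own assumptions, not Meta’s actual CIB tooling.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative heuristic only, not Meta's actual CIB tooling: flag groups of
# accounts that post identical text within a short time window of each other.
def find_coordinated_clusters(posts, window=timedelta(minutes=10), min_accounts=3):
    """posts: iterable of (account_id, text, timestamp) tuples."""
    by_text = defaultdict(list)
    for account_id, text, ts in posts:
        by_text[text.strip().lower()].append((ts, account_id))

    clusters = []
    for text, entries in by_text.items():
        entries.sort()  # oldest post first
        for start_ts, _ in entries:
            accounts = {acct for ts, acct in entries if start_ts <= ts <= start_ts + window}
            if len(accounts) >= min_accounts:
                clusters.append({"text": text, "accounts": sorted(accounts)})
                break  # one hit per unique text is enough to flag for human review
    return clusters

# Example: three accounts pushing the same message within minutes of each other.
posts = [
    ("acct_1", "Polls close at 5pm today!", datetime(2020, 11, 3, 9, 0)),
    ("acct_2", "Polls close at 5pm today!", datetime(2020, 11, 3, 9, 4)),
    ("acct_3", "Polls close at 5pm today!", datetime(2020, 11, 3, 9, 7)),
    ("acct_4", "Remember to vote.", datetime(2020, 11, 3, 9, 5)),
]
print(find_coordinated_clusters(posts))
```

A signal like this only generates a lead; as described above, it still takes hours of human investigation before anyone writes up a hypothesis.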

Information flagged as misinformation or disinformation went through a process of labeling and demotion before the content was reviewed by 3PFCs (Third-Party Fact-Checkers), based on priority level (the infraction and its potential for real-world harm). The review process could take anywhere from 3 to 12 hours; while we had a prioritization system, we couldn’t force content selection or control who reviewed the content. If content wasn’t in English, we often leaned on internal Meta staff for translations and context before sharing it, to grasp the scope and potential threat. We were responsible for ensuring that content was properly vetted and for deciding whether it stayed on the platform or was removed, based on factors including Facebook’s internal Community Standards, consultations with our legal teams, and the context around a post needed to gauge potential real-world harm. Finally, we examined the content’s probability of going viral based on the user’s footprint (there’s a significant difference between someone with 1,000 followers and someone with 50,000). The Russian disinformation campaign was more sophisticated than a politician or celebrity posting incorrect early voting dates or polling locations after last-minute changes by a Secretary of State. The tactics the war room used for content moderation decisions resemble how legislators draft new policies or how people make judgment calls in everyday life. Our mandate, however, was to prevent real-world harm related to the 2020 election.
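
As an illustration of how that kind of prioritization can be expressed, here is a minimal sketch that scores flagged content by potential harm and likely reach. The harm tiers, weights, and field names are hypothetical; this is not the system the war room actually used.

```python
import math

# Hypothetical prioritization sketch: score flagged content by potential
# real-world harm and likely reach, so the riskiest items are reviewed first.
# The harm tiers and weights are illustrative, not the system described above.
HARM_WEIGHTS = {"calls_to_violence": 3.0, "voting_misinformation": 2.5, "other": 1.0}

def review_priority(harm_type: str, follower_count: int) -> float:
    """Higher score means review sooner. Reach is log-scaled so an account with
    50,000 followers outranks one with 1,000 without drowning out the harm signal."""
    harm = HARM_WEIGHTS.get(harm_type, HARM_WEIGHTS["other"])
    reach = 1.0 + math.log10(max(follower_count, 1))
    return harm * reach

# Example: rank a few flagged posts for the review queue.
flagged = [
    ("post_a", "voting_misinformation", 50_000),
    ("post_b", "calls_to_violence", 1_000),
    ("post_c", "other", 200_000),
]
for post_id, harm_type, followers in sorted(
    flagged, key=lambda p: review_priority(p[1], p[2]), reverse=True
):
    print(post_id, round(review_priority(harm_type, followers), 2))
```

The point of a score like this is ordering, not judgment: the actual decision to label, demote, or remove still rested on fact-checkers, Community Standards, and legal review.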

While I no longer work at Meta, I am currently a Resident Fellow at the Integrity Institute, a think-tank focused on integrity across social platforms. Last weekend, following the terrorist attack against Israel by Hamas, a political party and terrorist organization in Palestine, I joined a call with a foreign policy think-tank to grasp what was happening in real time. I felt at ease listening to Middle East experts discuss events and predict outcomes. Then, when I logged onto Instagram, I saw my timeline flooded with videos of the attacks, photos of children in dire situations, and commentary on the imminent war by self-proclaimed Middle East experts. Most of what I heard on the policy call didn’t reach mainstream channels. Instead, opinions from non-experts, along with bombing clips from 2015, were circulated; Hamas wasn’t labeled as a terrorist organization, and many couldn’t even locate Gaza on a map. I posted an IG story cautioning against misinformation and against sharing unverified information from untrusted news sources, then shared my own perspective on the developing war. In 2019, I received my MA in International Relations from NYU with a focus on National Security and Intelligence, specializing in the Middle East and the Psychology of Jihadism. Although I transitioned into tech post-graduation, what I was witnessing made perfect sense. My master’s thesis focused on how platforms created havens for terrorist and right-wing organizations to organize, fundraise, and disseminate propaganda, potentially causing significant harm. I spent the subsequent days absorbing information, participating in discussions, and reading about the rampant spread of disinformation on social media platforms.

Disinformation campaigns aren’t new. In fact, before the era of social media, yellow journalism swayed citizens’ opinions on wars, politics, and the economic states of various nations. Disinformation campaigns led to the communist witch hunts of the 1950s and, post-9/11, to the unjust targeting of Sikhs and Muslims due to mistaken associations between Al-Qaeda and all Muslims. These campaigns have been present in every major election, from questioning President Obama’s American citizenship to rumors of Hillary Clinton’s alleged involvement in a sex ring run out of a pizza shop basement in D.C. While these examples might seem ludicrous, that’s precisely the point: once someone believes something outrageous, almost any other claim seems feasible. The habit of seeking reliable sources, understanding who authored a post, and seeing the bigger picture diminishes. We stop using the investigative tools we’ve learned in daily life to critically assess what experts or the general public say. Disinformation campaigns can be orchestrated by authoritarian regimes, agenda-driven organizations, or ordinary citizens. They can begin with a single post or video that gets reshared across multiple platforms, spreading virally.

The war between Israel and Hamas (and by extension, Iran, Hezbollah, and any other country that supports Iran) is complex and squarely resides in the grey. This fact alone makes it easy for disinformation campaigns to thrive because there is no single source of truth. The ‘truth’ depends on the country you reside in, your ideology, and your religious leaning. Since 2009, following the IDF military offensive nicknamed Operation ‘Cast Lead’ in Gaza, which resulted in the deaths of 1,383 Palestinians, including 333 children, Israel has engaged in wartime exercises and containment of the Gaza Strip, and we’ve had a front-row seat to the carnage. It’s unsurprising that videos from 2015 and 2020 surfaced, and that photos of building demolitions and missile firings were so prevalent. The content was already there, making it a perfect and easy weapon for organizations promoting disinformation to deploy, stoking confusion and outrage.

As the conflict deepens and evolves into an even more intricate geopolitical dispute than it has been over the past several decades, the disinformation campaigns from all parties could lead to miscalculations, placing more people in harm’s way and ultimately resulting in untold death and destruction.

 

While experts agree that the amplification of misinformation can increase around critical events, design changes on platforms can significantly reduce its spread. In 2022, the Integrity Institute, as part of its elections misinformation effort, created a dashboard that tracked misinformation across large social media companies, illustrating the true impact of platform design choices on the amplification of misinformation. The solutions I’m suggesting are not impossible to implement; they complement and amplify the work that trust and safety specialists have already done, and they can bypass months of testing and internal deliberation. First, it’s important to understand the difference between misinformation and disinformation:

  • Misinformation is the unintentional sharing of untrue content.

  • Disinformation is the intentional sharing of untrue content.

It’s very easy for misinformation to evolve into a disinformation campaign. We know it’s possible because there are reports detailing how this amplification occurs, often with the support of media platforms seeking engagement and revenue.

Below are solutions that both large traditional platforms and smaller ones can implement to protect their users:

  • Create more friction to prevent the easy sharing and reposting of content. Less friction allows users to share and repost content without hesitation. For example, users must take more steps on Instagram than on X to share content (via DMs, email, etc.), which has been shown to slow the sharing of misinformation. I understand that this inherently goes against the business model of social media companies, but making it harder for users to re-share suspected false information has its merits.

  • Remove engagement-focused content ranking and recommendation systems and replace them with ranking systems that favor accurate information over engaging content.

  • Agree on industry-wide standards for media provenance, including AI-generated or enhanced media.

  • Enable origin tracking of content (e.g., videos, articles, photos) so that material flagged as misinformation on one platform can be identified, flagged, and removed wherever it resurfaces (a minimal sketch of this idea follows this list).

  • Develop an industry-wide standard for labeling content to improve user understanding, and for removing content marked as misinformation by either users or AI.

  • Establish industry-wide notification mechanisms that alert users when content has been marked as potential misinformation and is under review by fact-checkers.

  • Notify content creators when their content is marked as misinformation to help raise awareness about the prevalence of misinformation.

  • Demote flagged content immediately based on its harm level. For instance, if there’s potential for real-world harm (e.g., calls to violence), the content should be removed until a thorough review of both the content and the profile sharing it is conducted.

  • Create industry-wide prioritization and real-world harm standards to ensure consistent tracking and removal of false content across all platforms.
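
To illustrate the origin-tracking and industry-standard items above, here is a minimal sketch of cross-platform flag sharing via content fingerprints. The registry interface and function names are assumptions for illustration; a real system would use perceptual hashing (so re-encoded or cropped copies still match) rather than the exact-match SHA-256 used here for simplicity.

```python
import hashlib
from typing import Optional

# Minimal sketch of cross-platform flag sharing via content fingerprints.
# A real deployment would use perceptual hashing so re-encoded or cropped
# copies still match; plain SHA-256 keeps the example simple. The shared
# "registry" dict stands in for an industry-wide database.
shared_registry = {}

def fingerprint(content: bytes) -> str:
    """Compute a stable fingerprint for a piece of media."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes, platform: str, reason: str) -> None:
    """Record that one platform flagged this content as misinformation."""
    shared_registry[fingerprint(content)] = {"flagged_by": platform, "reason": reason}

def check_on_upload(content: bytes) -> Optional[dict]:
    """At upload time, any participating platform checks the shared registry."""
    return shared_registry.get(fingerprint(content))

# Example: platform A flags a recycled clip; platform B catches a re-upload.
clip = b"...raw video bytes..."
flag_content(clip, platform="platform_a", reason="recycled 2015 bombing footage")
print(check_on_upload(clip))  # -> {'flagged_by': 'platform_a', 'reason': ...}
```

The value of a shared registry is that a clip debunked on one platform does not get a fresh start on the next; the hard part is the industry-wide governance around who may write to it and on what evidence.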

For users of social platforms, there are simple steps you can take to stop the spread of misinformation. However, it’s worth noting that while traditional forms of social media content are the most visible carriers of misinformation, disinformation campaigns are also being waged on encrypted channels, including WhatsApp and Telegram. Here’s what you can do to protect yourself and your friends from spreading misinformation:

  • Put more effort into fact-checking information, even from verified accounts on social platforms. Just because an account is verified doesn’t mean the user lacks malicious intent or that it’s a genuine account rather than a bot. By understanding the source and fact-checking information through a quick Google search or using certain tools, you come closer to grasping what you’re reading or viewing.

  • Trust traditional news sources that adhere to a code of conduct regarding accurate information. They will issue retractions for false information, and, for the most part, they publish only verified information. The Logically app is another tool you can use to help identify false information online.

  • If you encounter content that seems false or misleading, there are tools on all major platforms allowing you to report such content. If this type of content frequently appears in your timelines, consider reposting it alongside factual information to warn other users of its inaccuracy.

  • Check what trusted sources are saying about a topic. If reputable news outlets or officials (e.g., government agencies) are posting similar content, it’s more likely that the information has been verified. The International Fact-Checking Network consistently publishes fact-checks related to viral content and is a great resource.

  • Refrain from sharing content that you suspect may be misleading or contain misinformation.

  • We are missing a prime opportunity in classrooms to teach critical thinking skills and media literacy. While focusing on classics like Shakespeare is important, teenagers are actively using social media. They need tools to distinguish fact from fiction and to employ critical thinking skills. This will help them draw their own conclusions and ask more questions when digesting information.

Lastly, civil society organizations and think-tanks have a responsibility to use resources from the integrity community to help push for and build smart policies and methods that we know work. As the world gears up for another round of global elections, what organizations and technology companies build together now will determine how free and fair democratic elections will be. There will always be illiberal governments controlling narratives and targeting journalists and activists, but in societies that remain democratic, there is hope of preventing the further erosion that leads to dictatorship by stopping the spread of misinformation.

 

On January 6th, I was in upstate New York when my friend suddenly called for me to look at the television and my phone went haywire. For hours, I spoke with my team at Meta about how to respond to the crisis in real time, all while watching the coup unfold on the steps of the U.S. Capitol, fueled by a lie perpetuated and shared by large swaths of the Republican Party. Extremist groups, such as the Proud Boys, weaponized that lie, turning it into disinformation. They then used images from the coup to continue fundraising and to heighten tensions among U.S. citizens. Hamas is doing the same. Governing bodies, like Meta’s Oversight Board, exemplify what slow governance might look like in an ideal world where power dynamics and narratives don’t constantly shift. However, in the face of disinformation campaigns, we can’t afford to wait for deliberations made in a vacuum that take, on average, three months and then require additional time for implementation. We need solutions now.

As this war escalates and the world enters a global election cycle, advocates, regulatory leaders, and legislators cannot afford to wait. For citizens who consume social media, employing tactics to prevent the spread of false narratives, which are eroding the social norms upon which this world was founded, is the only way to curb the deliberate creation and sharing of disinformation.

Warfare might seem like a strange term to describe what’s unfolding globally, but we are living against the backdrop of global wars, some fought by traditional means and others waged through technology. The IDF, along with countries that support Israel, is embarking on a lengthy endeavor to control the narrative while responding to Hamas. Hamas, described as ‘an entity as much a network, movement, and ideology as it is an organization where its leadership can be killed, but something akin to it will survive,’ parallels misinformation. The lie doesn’t vanish with the removal of the original content, but without that original reference point, much like terrorist organizations, it can fade and lose relevance over time. Terrorist organizations are rarely, if ever, fully defeated; they are merely degraded, usually only temporarily, before they wreak havoc once more, often under a new name or leader. Misinformation carries the same potential. As Israel strives to eliminate Hamas and prevent a similar organization from emerging in the region, the integrity community should urge technology companies to implement these small yet significant changes before it’s too late.

I know firsthand the power of disinformation campaigns. Fighting them in traditional media (print and news) is challenging but has been achieved due to the diligence of journalists and fact-checkers. Combating disinformation through technology domestically is tough and merits precision. Yet, on a global scale, to prevent further violence, doing what’s difficult is essential, even if we don’t succeed the first time. As technology advances without the proper foundation across the industry, we risk becoming victims, unable to distinguish truth from fiction.
