Integrity Talks Series: How Platforms Engage Governments

By: Nichole Sessego, Integrity Institute Visiting Fellow


For Integrity Institute members, in addition to time with family and friends, indulging in some treasured recipes, and maybe a binge of holiday movies with questionable production value, this time of year is also when we spend the most time trying to answer the question, “So, what exactly do you do, again?”

It was with some version of that experience in mind that we came up with this series.

Below is the first of what we hope will be many conversations with Integrity Institute members and friends that aim to demystify some of the integrity topics most often cited as confusing or frustrating. Our goal here is “real talk” about issues that we frequently see misunderstood or dramatized, or that just haven’t been discussed in a useful way for someone who hasn’t worked on integrity issues day in and day out.

To kick off, we wanted to cover one of the highest profile — and most controversial — integrity issues we see in the news: how platforms engage directly with governments, and why.

If you have suggestions for another topic that our experts can weigh in on, let us know by connecting with the Integrity Institute on LinkedIn, or by reaching out to me, your moderator, Nichole Sessego.

Our experts have worked with and across some of the most well-known social platforms. They have been in the trenches working with governments on issue areas that include elections, pandemics, geopolitical crises, law enforcement, and more. We all met to have a candid chat with the hope of shedding light on some of the challenges, trade-offs, and misconceptions that integrity teams might face in navigating these relationships.

Key Themes

The majority of our conversation is included below with some edits for length and clarity, but here’s the tl;dr of what we discussed when it comes to the complex and multifaceted relationship between governments and platforms. I hope that the discussion below provides some insight into these integral relationships, and that some of these stories show off more of the humanity behind these decisions.

Education gaps between governments and platforms

Governments’ understanding of how social media platforms operate varies widely, ranging from officials with little technical knowledge to those with quite a sophisticated grasp of how these platforms work. Platforms devote significant effort to educating government officials to close this gap. It’s critical for stakeholders in government to understand the basics of these platforms if they’re going to engage on more complex issues, like content moderation policies.

Misperceptions of platform decision-making and operational realities

Governments and media often have misperceptions about how decisions get made at platforms and may assume top-down leadership. In our panelists’ experience, decisions generally involve input across many — and sometimes quite large — teams, in addition to the operational limitations that have to be taken into account. Policy changes happening over just a few days is an extreme situation, and “overnight” changes are nearly impossible.

Prioritization of election support

With finite resources, platforms must make difficult decisions about which of the many elections taking place across the globe to prioritize for integrity support. This can lead to disagreements with some governments. We cover some of the criteria used for this prioritization, including risk factors, available resources, supported languages, and more.

Nuance and transparency

Both government and media often miss nuances in platform policy decisions and enforcement actions. More transparency from platforms (or from leaks) around internal processes has increased understanding in many ways, but can also be easily misinterpreted.

Geopolitical tensions and changing regulation

As different countries enact divergent laws governing platforms, conflicts between governments are likely to increase. This will significantly impact platforms' operations globally, and we’re already seeing how some of those changes are coming into play.

Panelists

Meet your first panel: Alexis Crews, Crystal Patterson, Katie Harbath, and Matt Graydon.

Kicking off with the main question, we know that platforms and governments talk, but the question is why? Why can't governments just use the platforms like everybody else? 

Alexis Crews:
So when we think about platform usage, we’re thinking about individual users, and government entities are not that. They represent thousands, if not millions, of citizens.

I think that for governments, it's really an education thing, and then it's opening up channels, and also being a point of contact for when things (crisis, coups, etc) are happening globally.

So what exactly are these education gaps?

Crystal Patterson:
It's interesting because the gap varies. I went into some offices, and literally the elected official didn't know how to turn on their computer. Then there were others who managed their own social media accounts and really understood it well, especially younger members.

A lot of members aren't native to social media, and they didn't understand how posting a picture had anything to do with what they were doing on the floor that day, or the policies they were trying to pass. Getting them over that gap was critical for them to understand the implications of what the products could do in the wider world. If they can't see how this stuff relates to everyday life, they can't really pass good policy.

Then for the ones who were more advanced, it was just digging into a lot of the very challenging problems we would tackle on the teams all day long. Which is, “How do you write an effective policy for deep fakes? Who decides what’s true and what’s not true? What counts as misinformation, what counts as disinformation? What’s the right enforcement mechanism for those things? What is the platform’s role in making those determinations versus what the government should be doing?”

What about governments outside of the US? Were there major differences in the gaps of understanding there?

Katie Harbath:
I would say the gaps were very similar, but internationally they might just lag behind the US and Europe.

So there were conversations I might be having in other countries that I had had two or three years prior in the US. It somewhat coincided, too, with where Facebook placed public policy people in these different regions and countries.

Especially in the very early days. Take Brazil: Bruno joined Meta, and in his first week he was testifying in a hearing. Then it was very shortly thereafter that he was like, you gotta come down here; in every meeting I’m hosting about policy, they’re asking me to help them with their (Facebook) page and stuff like that.

People are like, somebody from Facebook, great! Here are all my questions! Even if you aren’t the expert on them. 

And then I think, while the gaps might be the same, how they thought we should address them differed, due to their different cultures and contexts.

Can you say a little more about that, those differences?

Katie Harbath:
Somebody in the Middle East might be really upset by nudity, whereas people in Brazil were like, “why in the world are you taking down all these pictures from Carnival?” They might have a very specific example they’re upset by, and their own reasons for being upset about it. The commonality is they don’t understand how the platform works, and so you had to explain that. With the policies, the differences might be in what they thought we should actually do about it.

Alexis Crews:
I think that there are two very distinct customers. I would say one is what Katie was describing: there’s a really big gap in terms of knowledge, and they’re a few years behind the US. And then I would say countries like India and Turkey really know the power of social media and the power of technology. They have been able to make very strong and opinionated asks, and they use different policy levers [to advance those asks].

There was an operations team set up to work with public policy and respond to all these inquiries. So, internally, we would have to coordinate with different teams to determine a) whether these flags coming in from different countries violated Facebook’s community standards at the time, and b) whether or not these (asks) were actually legally binding. So if the content didn’t violate, what did that look like? Could we say no?
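
To make that triage concrete, here is a minimal, purely hypothetical sketch of the decision flow Alexis describes; the function and outcome labels are illustrative, not Facebook’s actual tooling:

def triage_government_request(violates_community_standards: bool,
                              legally_binding: bool) -> str:
    """Rough decision flow for a single government content flag."""
    if violates_community_standards:
        # Policy-violating content gets actioned under the platform's own
        # rules, no matter who reported it.
        return "enforce under community standards"
    if legally_binding:
        # A valid legal order on non-violating content means weighing a
        # narrower remedy against the risk of being sued for refusing.
        return "assess legal order: comply or contest"
    # Non-violating, non-binding asks can be declined.
    return "decline the request"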

So, I think there are, broadly, two types of relationships with governments, but I would say maybe like 85% of the players are definitely lacking in knowledge, which I think is why public policy and government affairs teams are so important. But so are global operations teams who work on the back end.

Katie Harbath:
[Regarding Alexis’ point about whether or not asks were legally binding] In particular with Brazil, the judicial system was also very important because they’re a highly litigious society (for example, one of WhatsApp’s VPs was put in jail). So some of these conversations getting into law enforcement might be kind of out of scope, but it’s also worth mentioning that’s another element if you’re talking about government engagement.

Matt, I'm curious if there's anything out of what's already been said that was different for you? Facebook had a lot more external-facing teams (compared to other platforms), and so I wonder how that compares to your experience?

Matt Graydon:
Yeah, I would say that my experience, both at Twitter and in my post-Twitter life, has aligned closely with what Alexis and Katie have described in terms of government understanding of policy making and processes. Just really running the gamut with some of those interactions.

I had some interactions involving pretty basic, fundamental questions about how policy making, detection, and enforcement work, but I also met with governments who had a really sophisticated understanding, like the Ukrainian government. Obviously they’ve had an accelerated crash course.

On the civil society and academia side, a lot of those interactions centered around understanding automated versus human enforcement, and conveying a sense of the scale of content and what it means to enforce effectively at that scale.

Building off what Crystal said, I found a lot of interest and engagement in just walking people through some of the policy challenges that platforms face: how would you balance these trade-offs, and these principles, and these needs? I felt like that was really quite productive.

One thing that just came back again and again is the desire to have these more operational conversations. “What does it look like? What does the day to day look like? What do nuts and bolts look like?” Rather than talking about these high level philosophies.

One other concern from government and civil society — and this was specific to Ukraine, but I think is probably quite broadly applicable — is with platforms treating different entities within those sectors as a monolith. They check that box of, “All right, I’ve coordinated with and have a relationship with this government agency,” but there may be other government agencies that focus on similar issues and have their own agendas.

The piece of feedback I heard that really stuck with me was from a journalist in Ukraine, who was like, “Yeah, Facebook and Google engage with us, but then they’re engaging with some media-sector-focused NGOs and my outlet doesn’t feel that NGO really represents us well.” But then (on the platform side) it’s, “All right, we’re engaging with the wide-ranging media NGO, and that’s good enough.” So I think it’s important for platforms to have that better understanding of the diversity of stakeholders within the groups they’re engaging with.

We've talked a lot about how so much of the relationship between platforms and government is about managing that education gap. In a similar vein, what do governments often just get wrong about the platforms, or vice versa, what do platforms get wrong about governments?

Katie Harbath:
Nobody specifically said this, but they alluded to it often: members would think there’s somebody specifically looking at their page and wanting to take action. So there’s that misperception of what this looks like and how the companies are making decisions. I think when any of us spoke with these folks, a lot of times they’re like, “Well, you’re not the decision maker, put me in front of the decision maker.”

I also think that was a big misperception on the outside: A lot of them thought that somebody could make a unilateral decision to fix their problem. And unless that person was Mark Zuckerberg, that wasn't going to happen. And even if it was Mark Zuckerberg, it still might not happen.

Crystal Patterson:
I can't think of a single decision, small or large, that was made by any single person when I was working there, up to and including Mark Zuckerberg.

When there was an important question or anything that had a gray area — and if it made its way to him, then it was certainly in the grayest of areas — he didn't want to be the arbiter of anything and really encouraged input from multiple people.

I'll give an example. We had that manipulated video of Nancy Pelosi that went around, there was a point made to talk to different people from different areas of the team to get their perceptions and ideas about what we should do about that kind of material -- that video specifically -- but it also had to ladder up to a larger policy so that we had some consistency about what we were doing.

Even stuff that went up to Mark had to have that consistency and that integrity in the decision making.

Alexis Crews:
That's a really good example, and I remember that video.

You have input from maybe 20 teams to create a brief and understand (for an escalation), “What policy is this possibly violating? Is this violating?” et cetera, et cetera.

So, when people think, “Okay, we’ll reach out to Mark,” it actually doesn’t go to Mark. You still go through the same process as everyone else.

I also think, to that point, the biggest misconception is that these policies can be done overnight. It’s more like a three-month process, and it really is an extreme circumstance where we can adjust a policy or draw a new line faster. That does not happen very often.

Alexis, could you speak to some of the operational limitations?

Alexis Crews:
There are so many. 

But one of the biggest things is that each country does have a different set of rules and regulations. They have a different set of laws. So you have to be diligent in reviewing any request that comes in from those countries. 

Then you have to find translators who are able to translate the text and also provide context. This could take hours. Sometimes these requests were really from illiberal governments that were trying to get private information from users. Then we have to figure out how to come back to them and say, “No, this isn’t actually violating,” and also decide, “Are we willing to get sued?”

There are also four different teams in four different time zones working on this (for Facebook) — a team in Singapore, a team in Dublin, a team in London, and then two teams within the US. Not everyone has the same context and not everyone has the same language skills and capabilities. 

And then these teams are very small. You’re looking at teams of less than 100 people who actually review everything coming in from governments, from certain civil society organizations, and from activists.

Matt Graydon:
It's been really interesting hearing everyone's perspective from the Meta side. I think it also highlights for me an area where there may be misunderstanding from the government, which is that these platforms are very different. 

I know Meta is much larger, with resources at a scale Twitter never had. But [at Twitter], we were also just a lean team that could move quite quickly — each person just had to wear a lot of different hats within the team. So I think that [how different companies approach similar challenges] has always been something interesting to emphasize as well — and I don’t think governments necessarily make those distinctions [between companies] as easily.

Matt, could you speak to deciding how much to engage with a given country, and how those decisions were made?

Matt Graydon:
One of the big challenges when I first started was helping to operationalize and refine that framework for our elections program: identifying the finite resources in terms of people power, products, and policies available for elections, then looking at the global elections calendar and being able to rank elections and allocate support. It was challenging to have a consistent classification system and to align on these risks.

Essentially, the system that was put in place looked at a set of risk factors: regulatory compliance requirements, information integrity risks, the size of the market (or country) in question, and any other human rights or election-specific risks to that market. From those, we could come up with a rough formulation for prioritization. This was always both an art and a science.
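
As a purely illustrative sketch (the factor names, weights, and numbers below are hypothetical, not Twitter’s actual framework), that kind of rough formulation can be thought of as a weighted score computed per election, producing a first-pass ranking that regional teams then adjust:

# Hypothetical election-prioritization score; all weights and factors
# are illustrative only, not Twitter's actual framework.
WEIGHTS = {
    "regulatory_compliance": 0.25,  # legal obligations in that market
    "info_integrity": 0.35,         # expected mis/disinformation pressure
    "market_size": 0.20,            # normalized user base in the country
    "human_rights": 0.20,           # e.g., risk of offline harm
}

def priority_score(risk_factors):
    """Combine 0-1 risk factors into one rough prioritization score."""
    return sum(WEIGHTS[name] * risk_factors.get(name, 0.0) for name in WEIGHTS)

elections = {
    "Country A general": {"regulatory_compliance": 0.8, "info_integrity": 0.9,
                          "market_size": 0.7, "human_rights": 0.6},
    "Country B local": {"regulatory_compliance": 0.2, "info_integrity": 0.4,
                        "market_size": 0.1, "human_rights": 0.3},
}
# The score only yields a first-pass ranking; as Matt notes, regional
# public policy feedback then adjusts it ("an art and a science").
for name, factors in sorted(elections.items(),
                            key=lambda kv: priority_score(kv[1]), reverse=True):
    print(f"{name}: {priority_score(factors):.2f}")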

And there were certainly times when the biggest challenge was concurrent elections — which is why 2024 is going to be such a difficult year.  For example, in 2022, you had Brazil and the US midterms and the (Twitter) acquisition happening all at the same time. It was a real challenge to balance the limited support that was available, burnout, and the need to have flexibility to respond to other fires. 

There was a healthy debate within the company, with public policy stakeholders advocating on behalf of their regions and the government counterparts they worked with. And I think it was helpful not to have to make a policy or product determination within just that process. You might say, “Oh, yeah, this is a low-priority election because it doesn’t meet XYZ standard and the metrics are fairly low,” but then we could get that important feedback, maybe from a government figure or from civil society, through public policy in that region, to help make a more informed prioritization.

Crystal Patterson:
That was really hard, I think: trying to explain why we decided one election was important and why another one wasn’t.

I think because people saw us as kind of a bottomless well of money at Facebook, it was like, “What do you mean you don’t have the resources?” But even if you had all the money in the world, there’s still a finite number of people, and a finite number of hours in the day. That just makes it really tough, and those trade-offs sound bad to the people who aren’t receiving the support they think they deserve. And I understand how they feel. I think saying, like, “oh, this country over here is more important than yours because they’re bigger or whatever” is reinforcing some inequalities that we probably don’t want to be responsible for. But at the end of the day, you do have to make prioritization happen, and it’s tough.

Matt Graydon:
Yeah. Just to that point, quickly, there was a pretty robust debate toward the end of my time about making our prioritization framework public and transparent, at least to some degree. 

And exactly the point that you just raised, Crystal, came up. It was going to make a lot of countries and their governments quite unhappy to know that they were prioritized lower. But at the same time, I think pulling back the curtain on how some of these decisions are made could have been helpful in some ways. But, yeah, it was very tough, we never aligned on a decision, and I myself am still very torn.

Alexis Crews:
Yeah. I think transparency is something that's really hard because we want to be as transparent as possible, but then we already know the backlash, and then there are certain things that we just can't talk about. 

When I was working with the (Facebook) Oversight Board, trying to explain the prioritization model and the different harms and everything, it was very interesting to get their responses in real time. They were not — and they’re academics and CSOs (civil society organizations) — they were not happy with how we were thinking about certain situations.

​​It's a hard thing to tell our stakeholders, and it's a really hard fight internally. There's not a lot of grace given to the people who are actually doing this work or trying to fight for these partners. We don't have all the money and the resources and then human capital. You talk about burnout, Matt. I think everyone who worked on the 2020 election had to take time off because everyone was burnt out. There aren't enough people. People are not replaceable. 

Crystal Patterson:
It's challenging because some of these smaller countries that might be more authoritarian, and for different reasons might not seem like they should be as high priority, are the places that might need the support the most. Having those internal debates about what that should look like, and how we fit in — we had some consistent tools — but we would do different levels. We would decide how much engagement we could reasonably give and do well.

That was the other thing. It wasn't just, “Are we going to engage in this election?” It's like, “Can we do this at the same standard we've maintained for everywhere else?” And I think that often became the deciding factor on whether or not we would do it. Which was: we don't have the language capabilities, we don't have the content moderation capability, we don't have the stakeholders on the ground to help manage the relationships we need, and I think those are actually really good reasons to not do it. If you can't do it the right way, don't do it. 

Katie Harbath:
Pretty much for all elections, we knew we could do an Election Day reminder — there were only a few where we knew that we likely couldn't. Towards the end, you had political ad transparency in most places, and you had fact checkers in a lot of different places. But [election engagement] was very policy partnerships driven, or it was a product that had been scaled on its own to all these countries. 

So at least we could go to places and be like, “Okay, yes, we don’t have a fully dedicated civic [POC], and we’re not doing an IPOC (Integrity Product Operations Center — a working group made up of subject matter experts from different teams) for your country, but we are doing X, Y and Z things.” It still didn’t make them incredibly happy, but at least there was that baseline you could offer.

Well, to keep on theme, how about the media? What does the media get wrong about the platforms, and is it different from what governments get wrong?

Alexis Crews:
How much time do we have? 

I have a really interesting example. MSNBC — this was a few weeks ago when the Israel-Hamas war started — they had to do this whole special segment on “This is how we vet things.” I was like, oh, this is great. Because now you fully understand what social media companies actually have to do to find misinformation and remove it.

Crystal Patterson:
I have a couple of thoughts on this one. 

Early on, it was the unicorn story of Facebook and Mark Zuckerberg. He had this origin story — I mean, there’s a movie about how he started the company! The personalities behind the technology seemed to trump the actual work. I think that continued through the years, though it has actually changed quite a bit (recently). So more often than not, the process story of what’s going on behind the scenes that led to a decision is as much of interest to the media as the actual impact of whatever we’re doing.

But I'm thinking of, like, the 2016 election on Facebook. I guarantee you that most people — if you walk up and ask them what happened with Cambridge Analytica — can't actually explain what happened. But I'm guessing most people could recount at least two or three different conversations they read about Mark and Sheryl, or something crazy somebody said in a meeting, because that kind of overtook the consumer information part of the story. That drives me crazy.

Another example: the last time we were dealing with some, relatively speaking, milder unrest in the Middle East (around 2021), we had some members of Congress whose districts have large Muslim populations who were really concerned about some of the content on the platform. They were calling the press, and we were getting all kinds of negative coverage because of what was removed and what wasn’t. There were real intricacies about what was or wasn’t allowed, and why. And again, the nuance of all that gets lost, whether it’s the media trying to report on it, or even trying to talk to a member of Congress.

They have a lens that they see this stuff through, and I don't blame them. Their job is to be skeptical, but it often means they don't hear us when we're explaining the nuance of this stuff. It sounds like we're trying to make excuses when, really, these things are just hard.

Katie Harbath:
The other thing I would just add is the media thinking that, just because the government complains to us, we’re taking action.

Matt Graydon:
Yeah, I do feel like the sophistication of the understanding of policy making and enforcement has grown post-Twitter. I attribute a fair amount of that to the Twitter meltdown. As much as it pains me to say — and I totally disagree with the fashion in which it was done, not least because my name is in there — the Twitter Files (provided) that kind of transparency around what kind of discussion goes into making these decisions.

People are interpreting the Twitter Files with whatever ideological lens they want to put on it, but the better reporting I've seen does speak to what I like to think was the very considered, nuanced, informed, and unbiased approach that the team strove to take. 

Crystal Patterson:
What Matt just said got me thinking. Since the Facebook Files dropped, I will admit I’ve had very mixed feelings about all the information she [Frances Haugen] shared. I do think people deserve to have questions answered, and we should have transparency around the work the platforms are doing, and all the things we’re discussing here.

I can say with confidence that, for any conversation we had internally, I would be fine with people knowing about it. I think we had a lot of integrity in trying to weigh the decisions we were making, and I didn’t have any problem being open about that.

The challenge I had with the Facebook Files was that, at any given time, on any given project, there were dozens — if not hundreds — of people working on a particular issue. 

I was one of literally hundreds of people working on elections. If you had just taken documents from my files, that would have given you such a small picture of what was happening, and also may have included only part of the story. So, if you're somebody who's tasked with trying to poke holes in our policies, yeah, you're going to find a bunch of files that say, “here are all the ways that we could get pinged for this,” or “here are all the places where we could have failings,” but it may not include all the solutions that we've been working on to try to address those. 

So seeing only part of the picture, I think, really does a disservice to the work that goes on across these teams, and I struggle with that.

But I do think it's good in a sense, because Matt's right. Like the Twitter Files, all these things have really spurred conversation. It got people thinking a little differently beyond what they see on a headline — they look at what's under the hood as all these decisions are getting made.And I think that overall is a good thing.

Any predictions on how these relationships might change in the future or have already changed from when you started?

Katie Harbath:
Meta wrote last week that the government stopped engaging with them in July, when that injunction came down from the Louisiana judge. So there’s a lot pending with the Supreme Court decision in Murthy v. Missouri that could change things. It’ll be interesting to see where the decision comes down, but then also where public perception continues to evolve on this, and whether that will change it.

I also think the geopolitics of all this is going to get more challenging, especially as we go post-2024 with all these different countries having their own laws on this. You’re going to start to see more governments fighting with one another about these topics, rather than just individual government-to-platform disputes.


And that’s our conversation! For those of you reading, I hope this pulls back the curtain a bit on the inner workings of platforms and their motivations when engaging with governments. Again, if you have suggestions for another topic, let us know by connecting with the Integrity Institute on LinkedIn, or by reaching out to me, your moderator, Nichole Sessego.
