1 Year at the Integrity Institute: What I’ve Learned

The Integrity Institute hit its 3rd birthday this month. Around the same time, I also reached one year of working at the Institute, in a role supporting the development of a think tank for integrity workers. On hitting that one-year mark, I reflected on what I’ve learned from a year spent working to bring the expertise of trust and safety professionals to the public conversation.

1. Integrity workers care about making the internet a better place. 

Talking with policymakers, one starts to hear tech companies painted as monoliths that only care about profit and don’t care what harm their products may be perpetrating or enabling against users. Part of the Institute’s role is clarifying that there are in fact people inside these companies who are trying to make products better, and that they are often less successful in their jobs than they could be because the business side of the company wins out.

And this point has been driven home to me over the past year. Painted as censors, doxed, and threatened, integrity workers have borne a large part of the techlash from certain circles in the past few years. They have also faced hostility from inside their companies: in addition to working in often adversarial environments where their recommendations and warnings may be ignored, their teams (often seen as cost centers) have been divided up, reshuffled, and laid off.

However, in my observation, this is a group of people who by and large try to continue having impact even when their company has cut their position. They care about the well-being of the online world, even when they’re not being paid to. Given the chance, they speak passionately about the problems they’ve seen and researched firsthand. They are generous with their expertise and energy, and eager to find ways to contribute to the advancement of the field. 

2. Integrity is not just content, and it’s not just social media. 

To my colleagues and people in the trust and safety space, this is really obvious. But to much of the public (and, for example, many of my relatives), integrity = content moderation. The image of trust and safety workers carefully turning knobs on algorithms to hide or amplify content, or taking calls from Mark Zuckerberg to delete certain accounts, lives in people’s imaginations. Of course, people work on content policies and product development, which may, I suppose, involve some amount of knob-turning.

But I’ve come to see integrity workers as something like the anthropologists of the Internet: they study how people behave and interact online, and try to understand the ways in which the infrastructure of the online world shifts our social dynamics. They do this on social media platforms, payment platforms, B2B platforms, at AI companies, dating apps, search engines, and more. They are trying to identify, study, and mitigate the different ways harms can manifest in all these situations, as humans interact with each other through the technological infrastructure these platforms create.

When we’ve received questions from lawmakers and regulators asking how to fine-tune requirements for platforms to do x, y, and z to find, report, and remove illegal content, integrity workers have repeatedly tried to reframe the conversation. It’s not just about content moderation; it’s about behavior, actors, and ultimately, the design of the platform. Which brings me to my next point:

3. Policymakers have energy, but they need expertise.

It’s no secret that lawmakers are fed up with tech companies and want platforms to be doing more to improve safety. But the question of how to achieve that (and even what safety looks like) remains open. I’ve observed conversations between integrity workers and lawmakers over bills to ban the use of algorithms, where integrity workers were able to explain why “Algorithms Are Bad” is not the most effective policy position. Many policymakers want help fine-tuning their granular guidelines for platforms’ content moderation operations. And again, integrity workers have attempted to reframe the issue toward something more effective.

An internal survey of our members showed that integrity workers want policies that can help them do their jobs and make platforms safer. The mechanisms they identified as having the most potential to do that include risk assessments, transparency requirements, and audits. That is: things that create incentives and accountability, not specific policy or design requirements. Integrity workers – the people inside the companies making the case to leadership – are poised to help inform which incentives will be impactful and which will just be burdensome. Policymakers – perhaps rightfully – don’t trust tech companies, so many tend toward producing bills with specific and rigorous requirements on content policies and platform design stipulations.

However, demanding that platforms pour money and time into intensive measures of limited effectiveness and relevance takes resources away from the teams within the platforms that understand how bad actors are abusing the platform and can tailor responses to each platform’s shape and design. Policy that instead focuses on creating external incentives will have more impact, and will not be bound to the current state of technology and platform configurations.

4. Integrity workers are thinking a lot about AI. 

The people who have firsthand experience watching social media go off the rails while they tried to convince company leaders to hit the brakes or change course are now watching the AI space, and they seem tense about how things could go. I’ve seen (to quote Katie Harbath) a responsible amount of panic about the pace at which tools are being developed and rolled out.

Integrity workers are people who are not afraid of technology and who largely embrace it (“I’ll see if ChatGPT can write up a draft for me” is something I heard in a meeting this week), but who are also very realistic about the risks and about the fact that conscious decisions must be made to prioritize and invest in safety. Not everything from social media translates directly to the AI space, but the integrity workers I’ve interacted with are eager to bring their expertise to the table.

5. There are no easy answers, but unlocking the expertise of this group is invaluable.

I’ve been impressed by the appetite of our members to learn from each other and engage in discussions – and they don’t always agree. It turns out that “reducing harm online” is an ambitious project with different approaches and tradeoffs, and people don’t always share the same value judgments. People disagree on whether end-to-end encryption is compatible with safety. People disagree on whether engagement-based ranking is all bad (although, by and large, they agree: it is pretty bad). Whether an intervention is effective may depend on the type of product, what features it has, and who uses it. This is why our best practices for companies consistently include things like understanding your goals in a particular area, understanding your risk profile, studying how your product is being used and measuring specific indicators, red teaming, and iterating on your policies (staying flexible enough to incorporate new learnings).

There is an appetite among civil society and policymakers for the expertise and insight of our community. And these insights have an impact: a large part of my first year in this role was focused on helping our Elections Integrity Working Group publish a guide (or two) on best practices for companies’ elections integrity work.

When the guides were launched, we talked about how our expectations for impact were focused on the long term. It might take a while for interest in the resources to grow, and impact would build over time rather than making a big splash or being taken up right away. Mainly, we hoped members might share them within their companies as a resource for people working on elections (and many did!).

Then earlier this month, the European Commission released draft guidelines for platforms on mitigating risks to elections – and they cited our Guides as existing best practices. It’s encouraging to see that the expertise of this group of professionals is taken seriously and can have concrete impact as one of the most ambitious tech regulation regimes rolls out. The expertise of integrity professionals will continue to be key as we move from broad-based demands to “do something” about the harms we see on online platforms, towards proactively building an Internet that we actually want.


Abby Lawson is the Research Project Manager at Integrity Institute. Abby’s background is in the think tank space, producing research and managing programs at the intersection of tech policy, cybersecurity and international relations. Her work has covered a wide range of topics, from cyber insurance to UN negotiations on cybercrime and responsible state behavior in cyberspace – and most recently, content moderation and platform regulation. She is based in New York City.
