We’re at a Tipping Point as a Profession

Hey friends, it’s been a while since I emailed you. A lot has been happening behind the scenes.

Here’s one sliver of it — Sean and I went to London! I chaired the Trust and Safety UK Summit. I moderated many panels. We met up with a few important institutions that I’m not sure we can name publicly right now. I was a guest on the Tech Won’t Save Us podcast (listen here!). And I spent a good amount of time talking to current and future members. It was nice.

Seeing members and constituents at events like this really is important, energizing, and grounding. While I focus this email on a thing I said, honestly the best part of this trip was listening to, learning from, and making plans with members (plus people who are eligible to be members). There’s a version of this email that focused more on that, or more on the panels, ideas, and other things I heard at the summit. Why am I not sending that email? In part because it was a blur! Too much good stuff to talk about. And in part because I really am proud of the essay below, and I’d appreciate you reading it.

As chair, I had the honor of giving the opening remarks. I spent a good amount of effort on it alongside Alexis Crews (resident fellow) and Sean Wang (staff). It’s a good speech, and a good essay, I think.

Thank you, Alexis and Sean! And especially Alexis — did you know she’s a former speechwriter? In our time together she has surprised me with many hidden talents.

(Actually, the Integrity Institute and Rebooting Social Media at Berkman-Klein are co-running an event today and tomorrow on Trust and Safety in the Majority World. I’m drawing heavily from this speech for my remarks at that event as well. Will this become a “stump speech” for me? Well, that in part depends on you. Please take a read and let me know what you think.)

The good (and bad) future of the industry

My opening remarks at the T&S UK Summit — with minor edits, and interspersed with photos from the event:

Good morning, my name is Sahar Massachi. I run the Integrity Institute, which I co-founded three years ago with Jeff Allen (who’s not here today).

What is the Integrity Institute? It is a think tank on how to fix the internet. At least, big chunks of it.

Why should you believe we know what we’re talking about? You should believe us because we’re powered by our members, some of whom are with us today.

Our members are integrity professionals, like everyone in this room.

We are here because our work is at an inflection point. Layoffs, AI, new entrants, regulation, the rise of vendors, and more. We can, together (if we want), push for the good future that inflection point offers. If we don’t, we’ll get the bad one.

Over the two days that we’re gathered here, we’ll work together to advance the practice and theory of our work.

But why? What’s the deeper point? And what’s the good future we need?

  • First: the practice and theory of our work needs to spread. Everyone at our companies – the product teams, the growth teams, the policy teams, the ops teams: all of them can and must build responsibly. We can help our colleagues understand how and why.

  • Second: our work also needs to change. Our job has to involve shaping the product our companies are building, shaping decision-making about the product, and making sure that we aren’t brought in as firemen when the entire city is made of matchsticks and kerosene. We don’t just need to be hired; we need to be powerful.

  • Third: we know changes are coming. Legislation, regulation. These need to help us do our jobs well. We need to make sure we become empowered because of them, instead of forced into the compliance hell of checking irrelevant boxes rather than solving problems.

Those are the three main stakes that I see for advancing the practice and theory of our work: spreading it within companies as product design standards; empowerment of our profession; and building our collective expertise to meet this moment of external, regulatory developments.

Now, I want to take us on a journey focused on integrity.

I worked at the company formerly known as Facebook for almost four years. I started in growth and then moved to a smaller part of the company, called Civic Engagement. We were doing the good stuff - like helping people figure out when, where, and how to vote. It was a garden of Eden. Then the scandals happened. It was 2017. First Cambridge Analytica, then the IRA scandal, and then the scandals that just never stopped. We shifted our work from civic engagement to civic integrity, complete with a new name.

It was there that I saw, with my own eyes, how the goals of the civic integrity team were not the same as the rest of the company. In a big way, but also in a prosaic, everyday way.

As you all know, our work involves trade-offs.

Sometimes, the tradeoffs are small.

Other times, we end up in situations where literally the goals on our team’s H1 planning docs, if hit, would make the goal metrics on other teams’ planning docs go down. And the other teams often really don’t like that.

Does that sound familiar?

I saw, again with my own eyes, policy choices, product choices, organizational structure choices that sacrificed the safety of our users for short-term business growth.

Choices that, it turns out, hurt the business and product quality in the medium- and long-term.

Choices that also led to the erosion of trust in the company and the products we were creating. (The erosion of trust by our users, by society, even by people working there.)

The work of teams like the one I was on - civic integrity - was being squandered. Worse than a crime, it was a mistake.

And so here we are.

Many of us here are working at these companies. They graciously sponsored this summit, and they have hired us to do this important work! We’re on teams of smart, passionate, and committed people who just want to do the right thing.

And when I say the right thing, I don’t just mean the right thing for people and society at large, but also for the company we’re working at and its long-term product health. Its business health!

There’s a story you can tell of do-gooders trying to convince the company to do things against its interest, because, you know, ethics! That’s an okay story. It’s not entirely untrue. There’s a more real story. A more important story. That’s the story of professionals trying to save the company from itself.

Sure, sometimes there’s a tradeoff between safety and growth. Often, that tradeoff is only real in the short term.

The product of our companies is the interaction between users and content on their apps. We are the people who care about that product quality. We’re the ones who care about long-term (even medium-term!) viability and growth of the platform. And, yet, we’re the ones who often lose those internal battles. Here we are.

It’s exhausting, isn’t it?

As I look across this room, I know there are people here who have saved lives. People who have busted scams. Broken up crime rings. Stopped sex trafficking. Saved children. Stopped bullying, harassment and abuse. Prevented suicide. Protected democracies… and the list goes on.

Because that’s what we do. That’s why we got into this work.

We know the darker corners of the internet. We know the brightest, coolest parts too. We’ve seen it all.

Everyone in this room is a hero.

But, do you feel like a hero? Do you feel like anyone – even your co-workers – understands what you do?

This work is isolating and it can be lonely. What everyone in this room does is highly nuanced, technical, and specific. It’s a profession, and we’re working in a world that doesn’t understand that.

Every day I see people in the news who talk a lot about our jobs but don’t understand even the basics. Yet they are on television talking about our work, and you are not.

And sometimes that’s lonely.

Sometimes it’s isolating.

That’s one of the reasons why we built the Integrity Institute.

We created an organization that is powered by us. (People working in the industry! People who get it – because we work on it every day! People like you! People like me!)

Yes, the Integrity Institute is a think tank created to figure out how to fix the internet - and we’re here because people like us need a place for us. A place to figure out what we know. One person has an opinion; many people together have expertise. A place to find our collective voice and use it to create that good future.

Our collective voice is important.

Our voice to policymakers, who hungrily want to understand how platforms work, but need people they can trust to help them.

Our voice to our friends in companies (including our own) - sharing what works best, helping them do it right, and getting closer to a world where everyone in the company builds responsibly.

And even our voice to civil society and the press, defending our friends and coworkers when they get unfairly attacked. Giving companies kudos when they do the right thing (because we’re independent and trusted to be fair that way). And also helping civil society understand what the real problems are and how to get to that good future.

We are the experts and the world is hungry for our expertise. And that’s why we’re all gathered here today. To share our expertise with each other.

By the way, speaking of sharing this expertise with the world, I have good news. We’re already doing it! The Integrity Institute has worked with regulators and policymakers to bring your voice to the development and implementation of flagship tech regulations, including the Online Safety Act here and the EU’s Digital Services Act.

We’re sharing our insights (as a profession) with the UK and EU on things like: what comprehensive risk assessments can look like. Why they could be really important. How to avoid compliance hell. What data regulators should look for from platforms as they validate claims made in risk assessments. And how to mitigate those risks.

And, as you know, we have policymakers in the room with us. We have regulators in the room with us. Jessica Zucker from Ofcom is speaking right after me! And the UK is taking a fun leading role here in tech policy. The Online Safety Act. The UK AI Safety Summit was a big deal: 28 countries and a bunch of companies (who don’t necessarily all like each other) agreeing to subject AI models to safety tests before release. I wish 28 countries and all the companies would agree on something similar for social media platforms.

Beyond regulation, the UK, and more specifically London, is one of the leading hubs for the tech industry and our profession. Which is one of the reasons why this conference is being hosted here, and not in Brussels, San Francisco, or New York. (Sidenote: I am based in New York, so please let me know if you’re ever in town!)

Over the next two days, we’re here to discuss what short-term solutions can look like. What evergreen solutions can look like. We’re also here to figure out ways to get to the good future. That probably looks like collaborating across industry. Supporting institutions. Meaningful transparency. And more.

Lastly, we’re here to figure out how to do more with less — and how to stop being given less in the first place.

We’re at an inflection point. I want us to come out of it thriving. We must come out of it thriving.

We’re not a cost center. Our work is not an add-on. What we do is not a nice-to-have. We are, simply and truly, the guardians of product quality at our companies. And, if I may, the ones who speak for that quality to the world.

Thank you for being here. Thanks for all that you do. Keep it up.


That’s it! I hope you liked this essay / speech. For me, the speech provides a nice window into where we’re at as an industry and organization, and where we can go. Thanks for being on this journey with us.

- Sahar, with thanks again to Sean and Alexis for helping prepare these remarks, Sofia and Alexis for prepping the trip, and Sam Lehmann of IQPC for doing the lion’s share of the work setting up the Summit.


PS — Jeff and I will be at the Skoll World Forum in Oxford in a few days. Will you be there? It’ll be my first time — seems like it’ll be fun!
