From The ED’s Desk: Staff Retreat & Book Feature!
Hello!
Frankly, I’m astounded that we got so much done in November, given that we also pulled off a week-long retreat AND rested over Thanksgiving break. What a month.
Oh, right, I should explain: I’m trying to send you an email about once a month. So this covers roughly November. As I looked at my notes of what the Integrity Institute has been up to in that time: wow. And that’s just the more public stuff. There’s so much to go over. No time for fun digressions or opinions this time – we gotta jump into All the Stuff. So strap yourself in. Here we go.
We retreated to New York
That is to say, we had a staff retreat! Our first one since February. (We had a summit around May, but that was mostly about H2 planning – it felt like one long meeting, rather than a real retreat. This was different.)
The goal was to have fun, do some team bonding, and help staff get to know each other – and the organization – better. Over those four days, though, many staff wanted to do things that were more strategy- and project-adjacent. So we also spent some time going over ideas for what we’d do differently in the new year, walked through our what-happened-in-2023-so-far timeline, and talked about some ideas and plans for the future.
But also – we made little Lego figures of ourselves. We took walks. We exchanged books. (I gave away a copy of Teamster Rebellion. I picked up Let Your Life Speak.) People went to museums. We shared our favorite breakfast cereals and pictures of ourselves as children, and hung out with members at a happy hour. Also, Samidh joined!
Here are some photos!
I think it was a qualified success! We were balancing a lot. Did you know we have a bunch of new staff? That’s Michael, Sofia, and Spencer, plus our resident fellows Laure, Matt, and Alexis. We’re in a bit of organizational flux – we have lots of new staff to onboard, we’re ending the year looking for a managing director, and we’re just finalizing engagements with a fractional CFO firm and a strategy firm. A lot going on.
Special thank you to Idealist, which hosted us. And thanks to our staff, who came into that week with enthusiasm.
We broke some news, we were in the news, and we are in a book.
So, as you might remember from Jeff’s email last week, we published a new research investigation into extremist group activity on Telegram, jointly with Wired. So you should already know about that.
What we didn’t mention yet: Jeff Horwitz just published a book: Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets. It came out during the retreat! Our members are in it, our fellows are in it, we’re in it! One of the chapters even leads off with Jeff (Allen)’s story. The end of the book sets up Integrity Institute as the hope for the future. (Though I do quibble a bit with some of the details and framing.) I even did a dramatic reading of that section on the last day of the retreat. Thank you, Jeff Horwitz!
Sneak peek: best practices for small startups
We’re launching a new resource on best practices for startups. It’s a big deal. Our members have been working on it for about a year and a half, on and off. Expect more about this from us soon.
As part of that, we’re doing a sort of launch event for best practices for startups with our friends at the Prosocial Design Network. December 14th, 1pm Eastern. Mark your calendar – hope to see you there.
AI and integrity work are pretty damn intertwined
Speaking of the news – have you thought about AI recently? Yeah.
So as a reminder, there are two major ways (plus a bonus) that we think about AI:
First, AI alignment work is integrity work. AI alignment is extraordinarily similar to feed integrity. And not just that – it’s often done by the same people!
Second, generative AI means our online social experiences are going to change. The bad guys have new, cheap, powerful tools. The good guys do too. The normal people will soon. How will this shape information ecosystems?
Bonus: remember how I said that integrity workers are often also AI professionals? Right now, our members are not just working at AI companies, they’re writing books and blog posts and theorizing about all kinds of stuff.
We have a robust internal discussion group on AI. It meets once a week. Here are some recent topics:
Taxonomy of harms AI might cause
How AI and content moderation interact
How to think about state-sponsored AI
AI and elections
How can we do AI governance
AI and persuasion
AI and sentience
Deepfakes and elections
An open source model for moderation LLMs
Brainstorming legislative priorities
LLMs and moderation
Open models
The history of automated content moderation
Our members continue to be cool people doing impressive things
First off – our members did great stuff with us:
Secondly – our fellows did great stuff with us as well:
Visiting Fellow Grady Ward has a huge project exploring harms, the features that cause them, and the design mitigations to fix them. This is a sneak peek. It’s amazing. So much more on this soon.
Resident (and Visiting) fellow Matt Motyl just released a piece on mixed reality adoption & attitudes. Spoiler: only a small fraction of US adults report using this technology. Dive in.
Quick stat: only 4% of polling respondents had used VR in the past 28 days. Of those, 80% used it for gaming – and literally one person used it for work.
Lastly – our members do cool stuff out in the world, and we cheer them on even if they don’t explicitly do it as an II project.
Christine Moellenberndt joined the Initiative for Digital Public Infrastructure’s podcast, Reimagining the Internet, to talk about all things community moderation.
Megan Shahi wrote for The Hill on Meta's election denialism policies.
Alex Rosenblatt's team at Safety Kit partnered with NewsGuard to fight mis/disinfo.
Vaishnavi and other community leaders sat down with Rolling Stone to build out this piece on AI-generated CSAM and the risks it poses for children. Then Vaishnavi was quoted in the Washington Post, and then also the Wall Street Journal.
Here’s (some of) the stuff we’re working on right now
We’re gathering members to give group input to the UN! Specifically on the Code of Conduct for Information Integrity on Digital Platforms.
Over 50 members filled out our Policy Survey. We found out what our most savvy members think of the promise, perils, and opportunities for public policy to help us achieve our shared goals (aka, a better world for everyone).
We’re about to refresh our Community Advisory Board with our most engaged and values-aligned members.
Our election team, led by Katie Harbath, is putting together a nationally representative survey in conjunction with the Bipartisan Policy Center, States United, and Morning Consult.
We’re still working with the European Commission on a DSA Risk Assessments & Audits project. This project focuses on questions around implementing the DSA – specifically the articles on platform risk assessments, risk mitigation, and audits. Our goal is to create resources for regulators as they receive the risk assessments submitted by the platforms: how to do a platform risk assessment, how to assess a platform’s risk mitigation plan, and how to audit a platform.
We’re meeting with a bunch of US state legislators and secretaries of state.
Sneak peeks at what we’ll email you about in the future
Sometimes, we do big things that aren’t the traditional flashy kind you brag about. “We were covered in the New York Times” is easy to brag about. “We set up great accounting systems” – less so. But you know what? We’re gonna try to share that with you too. (Though maybe not about literal accounting.) Here are some sneak peeks of what we might email you about in the future:
We did our first probably-annual census of our members! Think of it as a 20-minute survey. Lots of juicy stuff there. Great participation rates. Diving into the data soon.
Samidh formally joined the board of directors!
Resident Fellows – yes! They’re here. About to formally announce the new additions.
Many, many members went public on our website.
(This one does count as flashy, but as a reminder:) our best practices for startups deck is launching.
Plus, news happened. Some things I’m keeping an eye on:
Arturo Bejar went public. He testified before Congress about how badly Instagram has done on child safety. “Each week 1 in 8 kids (age 13-15) on Instagram receive an unwanted sexual advance.” He is calling for more transparency. Seems like some damning articles came out in his wake.
(Reminder: At II, we explicitly reject whistleblowing as a strategy. It’s against our code of conduct. We have a policy on confidentiality that explicitly says not to do this. We don’t accept whistleblowers as members. We are pursuing a fundamentally different path.)
My first take: Arturo is right. More transparency is exactly what we’ve been pushing for years. Good transparency. Real transparency.
My second take: it’s no accident that this is about recommender systems and harassment, or that subsequent reporting is about recommender systems and CSAM. Ranking by engagement is one of the original sins of building a social app. It creates a gravity well toward bad behavior. Any app that doesn’t fix this fundamental flaw won’t escape the immense pull toward people doing the wrong thing – and finding loopholes to keep doing it.
Meta announced that election denial about past elections is kosher for ads. I don’t love this.
It seems really, really reckless.
It’s also unclear how you could say, with a straight face, that this works in favor of people, society, or democracy.
In my view, part of being in a free society means that organizations need to act responsibly. We rely on them to!
BONUS: Things we’re discussing in our member slack
Wix CEO Avishai Abrahami on why the web isn’t dying after all
“This is why you have a policy function.”
The psychological drivers of misinformation belief and its resistance to correction
It got some approving reviews and re-examinations from our members. Confession bear: I haven’t read this myself yet.
The Real Problem With Technology Ethics Is Our Leaders — And How We Enable Them
Don’t lump integrity workers in with the CEOs. Don’t confuse their decisions for our decisions, their values for our values, their opinions for our opinions.
U.S. stops helping Big Tech spot foreign meddling amid GOP legal threats
I don’t see how “hey, we found some Iranian government propaganda spam groups on your app, you might want to check this out” became so controversial.
Substack Has a Nazi Problem - The Atlantic
Literally – literally, not figuratively – every platform does moderation. Every platform has a line that users can’t cross. (The minute a platform doesn’t have that line, it’s drowned in spam and dies.) The only question is: where do you draw it?
How Meta Is Planning for Elections in 2024
In a more ideal world, we’d have seen every platform post its “how we’re dealing with the deluge of elections in 2024” plan in July 2023. Prepping for election integrity takes a long time! If you start paying attention only a few months before election day, the harm has already happened. The attacks have already succeeded before you stepped up to the plate. (See our Election Integrity Decks for more.)
Okay, so that’s a lot! We’re doing a lot. We’re crushing it.
Hope you liked it. Please reply to this email with your thoughts, or forward it to a few friends.
Stay warm, stay true.
Sahar.