LA-Bound. AI In The News. What Next?
Oh hey there! I’m typing this at 34,612 feet over Kansas, on my way to the Psychology of Technology conference in Los Angeles.
I like those guys. Ravi and team are doing good work. Check out their Substack; it has solid writing, including by Matt Motyl, who is a resident fellow with us and also affiliated with them over there. They also recently put out a design code for social media that has a lot of points I broadly agree with.
Matt is actually going to present at the conference, with his Integrity Institute affiliation prominently included. I’m so proud. Seems like every day there’s a little thing like that happening: a sweet little validation of our influence spreading and us being heard. Glad I could share it. (Here’s a look at his slides. But you’d probably get a better sense of what’s up by reading his posts.)
So, I’ll be in LA for the next few days. I’ll be at the conference, we’ll be throwing a happy hour for members and friends (wanna come? Email me!), and I’ll be trying to meet with a few potential funders. I’d love your suggestions for who I should meet while I’m there, especially in that crucial “this institution or person might be excited to donate” category :-)
Okay, onto the topic of the week: AI integrity.
Over the last few days, we’ve seen a flurry of big news about Artificial Intelligence. The Bletchley Park summit! The White House executive order! And don’t forget: foundations banding together to announce how much money they’re putting into this stuff. Seems like everyone is talking about “doing AI right” these days. So where do we fit in?
Well, we’re in an interesting place. We want to remain true to our identity and lane. Mission creep leads to the death of organizations! At the same time, there’s some good overlap between who we are and the “AI is eerily good now” moment, in a few different ways.
For instance:
Are you worried about what the new powerful AI will do to our society as it intersects with social media / dating apps / news aggregators / other platforms? Maybe you’re worried about deepfakes and misinformation and societal violence, for example. Or an explosion of bots. I have good news for you — the very same tools we’re highlighting for fixing the internet right now work for these problems too. And it’s even more urgent for companies to put them in place.
(The new, fancy kind of) AI is going to impact the integrity of our information ecosystem, on social media especially. It probably already has. Here’s one factor: spamming, fake accounts, and lying all become easier. On the other hand, our people, the defenders of information ecosystems, now have powerful new tools. Things are likely going to get weird.
The people in charge of making AI safe/ethical/aligned are doing the same sorts of things we do. AI safety people are our people. The work of tweaking a complex system that no one fully understands, using metrics to get a small window into how those changes affect its behavior, and keeping an eye on emergent properties and tradeoffs: that’s AI alignment, and it’s also feed integrity. The work of looking at a bunch of conversations, trying to build systems to detect when they go badly, dealing with adversarial behavior, and setting up iterative improvements: that’s content moderation and trust and safety. See this interview we did with the illustrious Dave Willner and Todor Markov at OpenAI for more.
Our members are literally in the AI companies doing the work. And they keep publishing and thinking about various facets of this problem. Turns out no matter what our plan was, our members are organically diving in! (More on that later).
And, frankly, I’m sure there’s more that I can’t think of at the moment.
Member-driven work and how we talk to foundations
So, it turns out we have a connection to many of these foundations that announced the big AI funding push. I sent each of them a personalized email listing a little of what we’ve done, often work by members on their own initiative.
Hey, it has some good stuff in it. Take a look; here’s part of the boilerplate I used as a base:
First off, many of our members are at AI companies right now. We’ve learned that “AI alignment” or “AI safety” work uses the same skills and often literally the same workers as what you might call “news feed alignment” or “social media safety” (AKA, integrity work or trust and safety work). Same people, similar skills. It’s honestly been surprising to me how much this is in our wheelhouse. Check out this longform podcast episode I did with Dave Willner and Todor Markov at OpenAI; I think it stands out as a high-quality and illuminating discussion of exactly that. (And people generally seem to really like it!)
Some highlights of our member-powered work in this area:
On AI and democracy, our recent elections integrity best practices guide has a section specifically on generative AI’s impact. And we just went deep on how to address political deepfakes!
On AI and innovation, we highlighted generative AI’s potential for improving trust & safety work.
On AI and workers’ conditions, we’re helping trust & safety professionals apply their transferable knowledge to responsible AI work.
On AI accountability, we spoke about how the current moment in AI development parallels the arc of social media platforms, and how empowering integrity professionals across the entire tech spectrum would encourage responsible design by companies.
On international AI norms and regulations, our members have commented extensively on not just US but also EU AI regulatory proposals.
People seemed to like it! We’re setting up a few calls soon to discuss this sort of stuff.
So there you go! A bit of a sneak peek at how we’re talking to the world. A sense of where I think we might fit with the new focus on AI. What do you think? Does it make sense?
(And, in a bit of fortune, the plane is about to land just as I finish typing this. What a journey!)
I always enjoy hearing back from people who read these emails. Please don’t hesitate to reach out and reply. Thanks for being in our corner.
Yours in service,
Sahar Massachi, executive director and cofounder