Announcing our Resident Fellows Program

Hello all,

I am extremely excited to announce the launch of the Integrity Institute Resident Fellows program and introduce our first cohort of fellows!

The past year has been a challenging one in the integrity and trust & safety spaces. Companies have been laying off thousands of employees, and integrity workers were among them. Thanks to generous donations from the William and Flora Hewlett Foundation and the John S. and James L. Knight Foundation, we have been able to offer a new home to some of the people who have been hard at work protecting the social internet from within the companies.

Our Resident Fellow program will help expand the Integrity Institute's ability to work with external stakeholders, support our constantly growing community (over 350 members now!) of integrity professionals, do original research that helps the outside world come to the same insights that integrity professionals get on the inside, and develop the profession of integrity work through community expertise and best practices.

We are so excited, lucky, and honored to have them aboard!

Cheers,

Jeff

PS – And of course, thanks also to the David and Lucile Packard Foundation, the MacArthur Foundation, Omidyar Network, Public Interest Technology Infrastructure Fund, Avaaz, and all our other funders whose general operating support makes this and everything we do at the Institute possible.

Alexis Crews

Alexis Camille Crews is an impact designer and strategist focused on redefining and creating new pathways to ensure equitable outcomes for future generations. Alexis is currently a Resident Fellow with the Institute, focused on the intersection of policy and partnerships, using her knowledge from Meta, where she worked on the Governance and Global Operations teams, to shape the integrity space. During her tenure at Meta, Alexis was chosen to be part of a small team leading the US 2020 Election War Room, ensuring election integrity for the US election; her scope included everything from policy creation to threat analysis. Beyond the US 2020 election, she worked on global regulatory escalations, the US Census, and global crises including the Myanmar coup. Before leaving Meta, Alexis led Learning and Development for the Oversight Board, providing it with the tools to make content moderation decisions, and, as Strategic Advisor to the VP of Governance, built and led the Global Engagement strategy to design new governance mechanisms for the Metaverse.

Alexis has a MAIR in Intelligence and National Security from NYU Graduate School of Arts and Sciences and is a proud alumna of Spelman College. Prior to joining Meta, she worked in politics and human rights. Alexis is a Council on Foreign Relations term member.

Why is integrity work important?
When I think about the Integrity and Trust & Safety space, I compare it to human rights work, which is integral to how we function as a society. When it comes to virtual spaces - on social media platforms and in AR/VR spaces - there aren't broadly accepted and implemented 'rules of the road.' There is no UN Declaration of Human Rights, which means that users aren't protected, and those protections - the values that most citizens abide by in everyday life - go out the window. So the work that we do as people who focus on integrity is instrumental in protecting every user who engages with technology on legacy and new platforms. More than ever, it's important to have Integrity and Trust & Safety experts in the room both at and outside of these companies. That's the only way change can happen and equitable user safety can be ensured. We understand the risks, and we also understand what's possible, because we created the policies and the mechanisms to track and remove bad actors at these companies. Change is usually incremental, but given the pace at which technology is evolving, something must be done now.

What is your integrity “hot take”?
Nation states shouldn't exist without guardrails.

At the Integrity Institute
Alexis will help us support our external partners and make sure the expertise and knowledge in our community is reaching and having impact in the outside world. She’ll be doing everything from identifying and building relationships with organizations and policymakers to ensuring that we have all the right content ready to help them learn from our community and make better decisions.

Laure X Cast

Laure has a decade of experience in product leadership roles working on collaboration and communication tech, including as Head of Research for the Marco Polo app, where they had the opportunity to work on integrity issues. They are a board member for Prosocial Design Network, a steering committee member for the Council on Tech and Social Cohesion, a steward at the Collaborative Technology Alliance, a mentor with All Tech is Human, a member of Aspen Institute’s Virtually Human working group, and a founder who is slowly working to design and develop a platform oriented around belonging and distributed-power/leaderful collectives. For the last two years, they have worked as a consultant helping nonprofits and social impact companies develop UX research and product discovery practices, as well as working with organizations on community design. Laure has a varied career background that includes work around HIV/AIDS, documentary film, and technology. They have been a speaker at SXSW, Mind the Product Leadership Forum, Grace Hopper Celebration, Product Stack LA, and the BuildPeace conference, among others.

Why is integrity work important?
Integrity work is important because large-scale social media and other platforms have made design and policy decisions that have led to harm, especially for those who are most culturally vulnerable. These choices have contributed to a decay in our democracies and allowed small numbers of people to have outsized negative impact. Integrity workers are uniquely positioned to address these problems from inside and outside platforms.

What is your integrity “hot take”?
My hot take: It’s all about the incentive structure of unchecked corporate money and power - everything else is just treating the symptoms - which we have to do to avoid even worse harm.

At the Integrity Institute
Laure will help us build and strengthen the community of integrity professionals and help ensure that the Institute supports both our members and integrity workers at large. They’ll be doing things like helping us learn directly from our community members about their challenges and needs, and developing community events.

Matt Motyl

Matt Motyl is a behavioral data scientist and social psychologist with 17+ years of experience studying attitudes, culture, and technology, and 6+ years building social technologies that combat problems like hate, misinformation, and intergroup violence. He’s an internationally recognized, award-winning scholar who has published more than 60 peer-reviewed scientific articles that have been cited more than 16,000 times. His research has been featured in many popular press outlets, including the New York Times, Washington Post, Time Magazine, and NPR. At Meta, Matt was a senior staff researcher on the Civic Integrity team in the lead-up to the 2020 US presidential election, and later in the company’s Social Responsibility organization, where he worked on COVID-19 research shared with the White House Coronavirus Task Force and led research on how to improve the way political content is ranked in Facebook’s feed. Today, Matt is a Resident Research Fellow at the Integrity Institute and a Senior Advisor to the Neely Center for Ethical Leadership and Decision-Making at the University of Southern California, where he manages a nationally representative panel survey of US adults who use social media and/or artificial intelligence tools.

Why is integrity work important?
Technology is omnipresent and is constantly playing larger and larger roles in our lives. It can help us do great things as individuals and societies, but it can also cause great harms. Integrity work makes it more likely that people benefit from social technologies and less likely that children are exploited, minorities are harassed, people are misinformed about how to stay safe during global health pandemics, and foreign entities interfere with sovereign democratic processes.

What is your integrity “hot take”?
Engagement-based ranking isn’t necessarily evil, but when it comes to content that is important to people and societies like health, news, and politics, it usually is. In these cases, dumb engagement models need to be replaced with contextualized models that account for whether the engagement is positive or negative, and who is engaging with the content (e.g., is it a highly homogeneous group of people in an echo chamber? Or, is it a diverse group of people from many backgrounds with many different beliefs?).

At the Integrity Institute
Matt will be helping us conduct original research that gives the public the same insights and intuitions about online problems and their solutions that integrity professionals develop from the privileged view and data access they have inside the companies. He’ll be doing work like supporting the Neely Social Media Index, which reproduces some of the standard survey data that companies gather internally, but in an open and public way.

Jenn Louie

Jenn Louie is a recent graduate of Harvard Divinity School and an Affiliate at the Berkman Klein Center at Harvard University. Her graduate research is a compassionate interrogation of how new technologies are shaping our moral futures and how moral conflicts are unintentionally replicated in tech governance and design through unexamined moral inheritances. She is an advocate for improving moral literacy for technologists and believes in cultivating innovation as a moral practice. Her latest research interests lie at the intersection of moral futurism, AI governance, design systems, youth and media, and social media governance, and their compounded impact on global affairs, society, and diplomacy.

Prior to graduate school, she served as Head of Integrity Operations for the Pages, Groups, Messenger, and Events platforms at Facebook. She previously held positions as the first Head of Trust & Safety at Meetup, and originally established her integrity career at Google with a focus on new products and monetization policies before working on new product strategy and operations. Jenn has industry experience in a wide variety of integrity and safety issues, online user policy development, content moderation, scaled enforcement operations for social media, online product integrity, online enforcement tools, and community support operations. Jenn has spoken on online risk and tech policies at SXSW, IDEO, law schools, Techweek NYC, the NYPD Cyber Intelligence and Counterterrorism Conference, and the Microsoft Social Computing Symposium.

Why is integrity work important?
Increasingly, the governance of society and its conflicts is shifting out of the public sector and into the private sector. Integrity work has a growing authority over our moral futures and diplomacy in ways we can't fully comprehend yet. I believe it has great potential to alter the course of humanity's great moral conflicts and governance, and likely already has.

What is your integrity “hot take”?
My "integrity hot take" is that prioritizing optimization for scale in addressing integrity problems has led to the replication of colonial governance structures.

At the Integrity Institute
Jenn will be helping us develop the discipline and profession of integrity work and of responsibly building the social internet. She’ll be digging into topics like what the ethics and values of our profession should be, and what a curriculum for certification and licensing could look like.

Tom Cunningham

Tom Cunningham worked as an economist and data scientist for five years at Facebook and one at Twitter, working on (among other things) ranking, content moderation, and company strategy. His work has been extensively quoted in many publications, as well as in the House report on Competition in Digital Markets and the House report on January 6. Since resigning from Twitter in November 2022, he has been writing about content moderation.

What is your integrity “hot take”?
The most important cause of bad stuff circulating is peer-to-peer distribution (chain letters, chains emails, reshares, retweets, forwarded messages), not ranking by engagement.

At the Integrity Institute
Tom actually joined us very early in the program as our first, and a bit experimental, fellow! He is now rolling off to go work on AI directly at a company. At the Institute, Tom led our AI discussion group and helped the community discuss and develop consensus around how AI will change integrity work and the online information ecosystem. He also produced several pieces:

  1. Suspensions of prominent accounts across platforms

  2. Ranking by engagement

  3. AI and communication

  4. The history of automated text moderation

  5. Social media and polarization

Jeff Allen

Jeff Allen is the co-founder and chief research officer of the Integrity Institute. He was a data scientist at Facebook from 2016 to 2019. While at Facebook, he worked on tackling systemic issues in the public content ecosystems of Facebook and Instagram, developing strategies to ensure that the incentive structure the platforms created for publishers was in alignment with Facebook's company mission statement.
