Why AI May Make Integrity Jobs Harder

In April 2023, Integrity Institute members held a generative writing session on AI’s implications for integrity work. This piece began at the session and the following members contributed to it: Swapneel Mehta, Sarah Amos, Rebecca Thein, and Soribel Feliz.


With the recent launch of Generative AI (GenAI) products like ChatGPT to the general public, the buzz around AI has risen to a deafening roar. The loudest voices, whether hype from within the tech industry or Skynet-style panic from critics, obscure both the harms this technology is already causing and the ways we as integrity workers can shape its future. We want to identify the vulnerabilities from a risk-mitigation standpoint as well as any opportunities for influence we can leverage in this moment to build fairer and more transparent products. As with most emerging narratives in Trust & Safety, integrity workers can stand up a forum of experts to prepare for the known unknowns and, by drawing on past trends and soliciting community feedback to crowdsource sustainable solutions, mitigate risk as best we can for the unknown unknowns.

To understand how change can happen, it is important to learn the history and complexities of these products. The technology behind GenAI, including Large Language Models (LLMs) like GPT, is not new, nor is it being built in a silo. It has evolved over a long period and rests on many other advances, from cloud computing to language processing methodologies. What is new is free, general access to these products in the market and the exponential growth of the underlying models, fueled by human interaction. Many companies have been developing this technology for years; until now, it was simply inaccessible to the general public.

As integrity workers, we can insert ourselves as the intermediate layer between the AI product development cycle and society. As professionals in the area of “what can go wrong,” we can take our lessons learned and educate AI developers to build more inclusive and accessible products by identifying clear pathways to potential harms. And when things don’t go as planned, which is certainly characteristic of language models deployed as chatbots, we can set up an information-sharing ecosystem for Responsible AI on the social internet.

As with most emergent technology, we face a challenge: the core technological development of generative AI is outpacing the creation of robust, publicly accessible testing and evaluation capabilities at a corresponding scale. It is important that we as integrity workers home in on the novel capabilities these products hand to actors invested in their misuse and abuse, and on the impact on the ecosystem, so that we can better combat that misuse. Even as the digital ecosystem evolving around generative models for text, audio, images, and video continues to empower end users, we can provide information and support the creation of safeguards against the manipulation of these tools to amplify harms. We can partner with others in a Responsible AI race to collectively drive change earlier in the process. Where the norm has been to deploy features only after testing, the current breakneck pace of development has produced a kind of “post-hoc safety” in which users test the product in production. As integrity workers, we need to insert ourselves into the conversation earlier in the AI development process and communicate risks as they evolve.

In support of integrity-informed AI development, integrity workers should play a crucial role in keeping their teams, their users, and the general public informed about AI’s capabilities, and even more so about the limitations of the technology today. To achieve this goal, we recommend the following steps:

  • Help create AI literacy: by influencing tech companies’ policies and operations, by working with regulators and local policymakers, and through our internal and external partnerships, including those that sit outside any single role or company, such as participation in think tanks like the Integrity Institute.

  • Openly discuss and present concerns related to AI, such as privacy, bias, exposure to harms, and the potential for automation to displace the ‘human in the loop’ (and the dangers that would result). We can discuss the specific threat types as we understand them today and create a better information-sharing environment for the future. Integrity workers can hold these discussions in many forums: podcasts, conferences, local integrity meetings and virtual trainings, or even social media posts and broadcasts. In doing so, we can help people understand the broader implications of using AI while we actively assess its evolving risks. This includes educating users on the known unknowns while working with AI developers and organizations to address the unknown unknowns, and showing how both can be incorporated into transparency reports.

  • Create a trusted global workforce in Responsible AI. We all face the consequences of how GenAI models are built and deployed, regardless of industry or occupation. The way products are coming to market in the AI space can be dangerous if we continue this race-to-market approach to deployment. Instead of an “AI Arms Race,” we should pursue a “Responsible AI Race.” We can do so by partnering with organizations that teach product development best practices or project management strategies, so that privacy, bias, addiction, data use, and user education are built into AI products and their underlying algorithms. We can also reach beyond the typical channels, partnering with employers to provide guidance on employee resources, such as recommended language to incorporate into employee handbooks. Across the industry, it will be critical to ensure that proprietary information, trade secrets, and personally identifiable information (PII) are not divulged to these tools in pursuit of productivity gains on a task.

  • We, as integrity workers, can supply general considerations for policymakers, regulators, and other organizations, outlining the risks that accompany using AI products for quick analysis or tasks without considering failure modes. But executives will have to invest in the integrity workforce, including AI-related training and upskilling, rather than simply adding to the existing T&S and content moderation workload. Companies must do more than outsource the work of training and moderating LLMs to underdeveloped countries with abundant cheap labor.

  • We can cross-pollinate the industry with experts who train people to help create a safer internet. Just as we recommend that social internet companies prepare for civic operations as best they can by anticipating societal reactions before, during, and after a civic event (like an election), we can apply the same practice in the context of AI. We should learn from other industries that have gone through exponential growth (think of the invention and adoption of the rotary-action steam engine during the Industrial Revolution). Or we can borrow models from organizations such as the Atlantic Council’s Digital Forensic Research Lab and its Digital Sherlocks program, which trains people to detect information manipulation on the social internet using OSINT techniques. These examples offer creative insights and practices for building a safer social internet in this new context.

Products are coming to market at an ever-increasing speed. The internet has come a long way from the World Wide Web of yore, and if anything, we can be sure it is going to be a substantially different place given the scale of multimodal content generation afforded by AI. As a society, we must recognize that we cannot walk back the real-life consequences of building and deploying these technologies irresponsibly. As integrity workers, we can use this opportunity to ‘get it right’ and bring our expertise to bear on this moment.

With the fast rollout of products to a larger user base comes the risk that AI becomes a crutch for employees’ tasks (e.g., “write this code for me”) rather than a tool for review (e.g., “check my code”). This mindset could change how we measure, or even talk about, individual productivity. As leaders in responsible innovation, we can urge technologists to frame AI products as tools that check work rather than perform it, and urge company leaders to treat them as an enhancement to productivity, not a detriment to it.
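To make the “check, not write” framing concrete, here is a minimal sketch in Python. It is illustrative only: ask_model is a hypothetical stand-in for whichever LLM client a team actually uses, and the reviewer prompt is an assumption about how such a workflow might be worded, not a prescription.

```python
# A minimal, hypothetical sketch of the "check my code" framing:
# the model reviews an existing, human-written change instead of writing it.
# `ask_model` is a placeholder for whichever LLM client a team actually uses.

def ask_model(prompt: str) -> str:
    """Placeholder LLM call; wire this up to your own provider's client."""
    return "(model response would appear here)"


def review_change(diff: str) -> str:
    """Ask the model to critique a change, keeping the human author responsible."""
    prompt = (
        "You are acting as a code reviewer. Do not rewrite the change. "
        "List possible bugs, security risks, and missing tests:\n\n" + diff
    )
    return ask_model(prompt)


if __name__ == "__main__":
    example_diff = "+ if user.is_admin: delete_all_records(confirm=False)"
    print(review_change(example_diff))
```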
