Comment to PCAST on Generative AI

In May 2023, the President’s Council of Advisors on Science and Technology (PCAST) launched a working group on generative artificial intelligence (AI) and invited public input on “how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks.”

Integrity Institute visiting fellows Theodora Skeadas, David Evan Harris, and Arushi Saxena organized and, together with Institute members Diane Chang and Sabhanaz Rashid Diya, submitted comments to PCAST in July 2023. Below we share a substantive excerpt; the full comments are available here.


We offer the following responses to the five questions PCAST posed in its solicitation. For each question, we suggest steps the President can take using his formal authorities and informal convening powers. If successful, we believe these measures, all of which support, direct, and encourage the development of new standards, technologies, norms, and citizen-powered safeguards, will help ensure the Nation’s continued innovation in generative AI systems while also protecting its citizens against the risks this technology poses.

Question 1: In an era in which convincing images, audio, and text can be generated with ease on a massive scale, how can we ensure reliable access to verifiable, trustworthy information?  How can we be certain that a particular piece of media is genuinely from the claimed source?

  • Transparency and accountability: Encourage transparency and accountability among media creators and platforms. Content creators should clearly label and disclose any media manipulations or alterations. Social media platforms and news organizations can implement robust verification processes and highlight trustworthy sources.

  • Independent verification: Fund independent verification of information. Promoting the use of multiple reliable sources, fact-checking organizations, and cross-referencing different perspectives can help establish the credibility of a particular piece of media.

  • Digital forensics and technology: Invest in digital forensics technologies that verify media authenticity. Reverse image searching, metadata analysis, and blockchain-based timestamping offer additional evidence about the origin and integrity of media files (a minimal source-verification sketch follows this list).

  • Technological solutions: Require generative AI companies to implement watermarking and labeling of all generated content and to advise users that they should clearly label and disclose when their content is produced by AI.
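To make the source-verification idea above concrete, the sketch below shows one common building block: a detached digital signature that a publisher issues alongside a media file, so anyone holding the publisher's public key can confirm the file is unaltered and genuinely from that source. This is a minimal illustration, assuming the Python `cryptography` package and an Ed25519 key pair; real provenance systems add key distribution, revocation, and standardized metadata that this sketch omits.

```python
# Illustrative only: detached Ed25519 signature over a media file's bytes.
# The publisher signs once; any consumer with the public key can verify.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair (done once) and sign each published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...bytes of the published image, audio clip, or article..."
signature = private_key.sign(media_bytes)  # distributed alongside the file

# Consumer side: check the file against the publisher's public key.
def is_authentic(data: bytes, sig: bytes, pub) -> bool:
    try:
        pub.verify(sig, data)  # raises InvalidSignature on any alteration
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature, public_key))                 # True
print(is_authentic(media_bytes + b" tampered", signature, public_key))  # False
```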

Question 2: How can we best deal with the use of AI by malicious actors to manipulate the beliefs and understanding of citizens?

  • Increase transparency and accountability: Require transparency about how AI systems work and how they are used, and require watermarks that verify the authenticity of content. Create new laws and regulations that hold AI developers and deployers accountable for their actions, whether they built the original system or deployed it.

  • Support independent fact-checking organizations: Fund independent fact-checking organizations that combat the spread of misinformation and disinformation. Strengthen collaboration between technology platforms and independent fact-checking organizations. Platforms should integrate fact-checking processes into their algorithms and provide transparency about their content moderation practices. Timely identification and removal of misleading content can help prevent its dissemination.

  • Develop new technologies and support research to combat AI manipulation: Fund researchers working on new technologies to combat AI manipulation, including tools for detecting fake news and deepfakes and for tracking the spread of misinformation online. Fund initiatives that support interdisciplinary research, the development of AI detection technologies, and countermeasures that boost societal resilience against information manipulation, such as media literacy education.

  • Stronger regulations and policies: Establish clear guidelines and regulations governing the use of AI in information dissemination, including standards for transparency, accountability, and disclosure requirements for AI-generated content.

  • Create guidelines for AI development and deployment: Provide practical, technical guidelines, which are currently lacking. Government entities and private-sector Responsible AI groups offer many ethical and societal guidelines, but companies still lack tactical standards and implementation guidance.

  • International cooperation and information sharing: Share best practices, exchange knowledge, and coordinate efforts between countries, organizations, and technology platforms.

Question 3: What technologies, policies, and infrastructure can be developed to detect and counter AI-generated disinformation?

Technology:

  • Advanced AI detection algorithms: Invest in research and development of sophisticated algorithms specifically designed to detect synthetic media. These algorithms can analyze patterns, inconsistencies, and anomalies in media files, text, and audio to identify potential manipulations. 

  • Data verification and source authentication: Fund technologies that verify the authenticity and integrity of data sources. This includes leveraging cryptographic techniques, digital signatures, watermarking, and other methods to establish the trustworthiness of the sources from which information originates.

  • Natural language processing (NLP): Fund NLP techniques that analyze the content of AI-generated text to identify inconsistencies or errors. Such techniques can flag language-model-generated text through the factual errors and internal inconsistencies that reveal a weak model of the world.

  • Cryptography: Fund cryptographic techniques that facilitate watermarks or other digital signatures used to verify the authenticity of content. This can help prevent AI-generated content from being passed off as human-generated content.

  • Metadata analysis: Fund techniques that analyze the metadata associated with media files, such as timestamps, geolocation, and device information, to detect manipulated or fabricated media (a minimal sketch follows this list). Propose and fund initiatives to develop public metadata standards that enable broader support across tools and platforms for countering AI-generated disinformation.
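As one illustration of the metadata-analysis bullet above, the sketch below reads the EXIF tags embedded in an image and applies a simple heuristic flag. It assumes the Pillow library; the generator keywords and file name are hypothetical, and since metadata can be stripped or forged, this is only one weak signal among the many a real forensic pipeline would combine.

```python
# Illustrative only: EXIF metadata inspection as one weak forensic signal.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return human-readable EXIF tags (timestamps, device, software, etc.)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def looks_suspicious(path: str) -> bool:
    """Flag images with no camera metadata or a generator-like 'Software' tag."""
    tags = summarize_exif(path)
    if not tags:
        return True  # many generated or re-encoded images carry no EXIF at all
    software = str(tags.get("Software", "")).lower()
    return any(marker in software for marker in ("diffusion", "dall", "midjourney"))

# Hypothetical usage:
# print(summarize_exif("photo.jpg"))
# print(looks_suspicious("photo.jpg"))
```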

Policies:

  • Government regulations: Outline standards for transparency, accountability, and disclosure in the use of AI for content generation and dissemination. Hold individuals and organizations accountable for spreading malicious disinformation.

  • Public-private partnerships: Foster collaborations between governments, technology companies, civil society organizations, and academic institutions to form a coordinated approach to detecting and countering AI-generated disinformation. Public-private partnerships can facilitate information sharing, joint research and development, and the implementation of effective countermeasures.

Question 4: How can we ensure that the engagement of the public with elected representatives—a cornerstone of democracy—is not drowned out by AI-generated noise?

  • Make it easier for people to contact their elected representatives: Simplify processes that enable citizens to contact their elected representatives. 

  • Robust identity verification: Fund robust identity verification systems that ensure that individuals engaging with elected representatives are real and authentic. This can involve multi-factor authentication, biometric verification, or other secure methods to confirm the identity of individuals and reduce anonymity and impersonation (a minimal sketch of one such factor follows this list).

  • Secure communication channels: Fund secure communication channels between the public and elected representatives. Encrypted platforms and authenticated communication tools can help ensure that messages and interactions are not intercepted or manipulated.

  • Transparent political campaigns and funding: Implement regulations and disclosure requirements to ensure transparency in political campaigns and funding sources. Implement regulations and limits on the use of generative AI to create political campaign ads.

  • Engage with AI experts and researchers: Collaboration between elected representatives and AI experts, researchers, and ethicists from both industry and academia can help lawmakers understand the implications of AI-generated noise and develop appropriate policies and regulations. Engaging in dialogue with experts can assist in identifying effective countermeasures and ensuring the public's voice is protected.
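As a concrete example of the identity-verification bullet above, the sketch below shows one factor, a time-based one-time password (TOTP), that a constituent-contact portal could require alongside a primary credential. It is a minimal sketch assuming the third-party `pyotp` package; the surrounding portal, account storage, and any additional factors are hypothetical and omitted.

```python
# Illustrative only: a TOTP factor for verifying that a constituent is a real,
# enrolled person rather than an automated or impersonated account.
import pyotp

# Enrollment: generate a per-user secret and hand it to the user's
# authenticator app via a provisioning URI (usually shown as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="constituent@example.com",
                            issuer_name="District Contact Portal"))

# Login: the user submits the six-digit code currently shown in their app.
submitted_code = totp.now()  # in a real portal this comes from the user
print("Identity factor verified:", totp.verify(submitted_code))
```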

Question 5: How can we help everyone, including our scientific, political, industrial, and educational leaders, develop the skills needed to identify AI-generated misinformation, impersonation, and manipulation?

  • Public awareness campaigns about AI-generated disinformation: Leverage schools, libraries, and other public institutions to conduct public awareness campaigns that build understanding of how AI systems can be used to create fake news, deepfakes, and other forms of disinformation, and to target individuals with personalized political messages.

  • Encourage people to report AI-generated disinformation: Encourage individuals to report AI-generated disinformation to a fact-checking organization or to the platform where it appears.

  • Open source tools and standards: Fund and promote development of open source tools, design patterns, and standards for detecting and combating AI-generated misinformation, in order to make these more widely available to organizations and companies of all sizes.
