Could This Be Global Tech Policy’s Biggest Year Yet?
Integrity Institute briefed funders and partners on bridge-building for global tech policy
Context
In May 2023, the Integrity Institute held a briefing – “Could this be global tech policy’s biggest year yet?” – that brought together integrity professionals working directly at and with social internet platforms, policymakers and regulators, civil society organizations, and funders. The briefing was by invitation only to enable a thoughtful discussion regarding priorities, challenges, and potential solutions in regulating and governing the social internet. This summary has been created to express themes of the discussion while protecting the privacy of the individuals involved.
Key takeaways
With so many countries and regions developing major tech policies, there is an unprecedented opportunity right now to advocate together for a healthy social internet.
The extent of harm caused by social platforms is often determined by platform design and governance. Therefore, tech policy should incentivize companies to do the right thing and mandate changes when necessary.
Most advances in platform regulation are happening outside the United States right now. It is important to support such efforts beyond one's own jurisdiction, because meaningful transparency from platforms anywhere would produce additional data and scrutiny elsewhere too.
Partnerships between integrity workers, regulators, and advocacy groups are essential for positive tech policy developments, and building the capacity to speak on technical matters from a position of authority would transform the horizon of possibility in policy deliberations.
With the unprecedented opportunity now, those advocating for a healthy social internet must build bridges between experienced tech practitioners and all stakeholders working toward a social internet where all can thrive.
Overview of the Institute
Like never before, people with direct experience building the world's biggest platforms are stepping up to contribute to policy and advocacy efforts alongside those with first-hand experience of platforms' harms. The Integrity Institute was founded to bring integrity professionals into community with one another and to build a bridge between their expertise and the general public. In their welcome remarks, the two funders who sponsored the briefing conveyed their enthusiasm for the Institute's mission to promote cross-sector collaboration between integrity workers, researchers, policymakers, and civil society organizations all advocating for a healthy social internet.
Following an introduction to the Integrity Institute’s mission and bridge-building model, the Institute’s research team presented four priority problems with social media:
Exposure to harmful content: Beyond users simply encountering hate speech, mis- and disinformation, self-harm content, and other types of harmful material shared by other users, the design of platforms itself can compound the harm through algorithmic amplification.
Addiction and problematic overuse: Aided by algorithmic amplification, platforms’ information feeds can perpetuate user addiction when poorly designed.
Bullying, harassment, and unwanted contact: Because social platforms involve user-to-user interaction, ill-intentioned users can exploit platforms to attack others.
Privacy: Platforms collect an abundance of user information. While collecting user information is not necessarily problematic, the manner and extent to which personal information is collected, used, and shared can undermine privacy.
Across all four problems, platform design and governance play a crucial role in determining the extent of harm. Any given platform's algorithmic and UI designs are deliberate choices made by the company. Therefore, we need the right teams inside companies to have more influence and impact, and we need companies to make better decisions.
To achieve these outcomes, the goal of tech policy should be to incentivize companies to adopt best practices by making harmful practices costly and mandating changes when necessary. Three specific policy solutions can help incentivize companies to change their behavior:
Creating accountability via comprehensive and meaningful transparency
Mandating integrity best practices
Limiting practices known to be bad or risky
The Integrity Institute has provided detailed resources on each of these policy solutions, and additional interventions were recommended during the briefing, including parental controls and limits on engagement features, notifications, recommendations, and ad targeting.
Bridge-building discussions
Following the overview, the Integrity Institute team and its partners shared successful interventions in which direct practitioner expertise from integrity professionals strengthened tech policy and advocacy efforts across the globe. They also discussed recent examples of missed opportunities, from which funders, advocates, and integrity professionals can take important lessons about how to better operationalize their collaborations in service of actionable, effective, and enforceable tech policies globally.
Organized into three bridge-building panels, tech practitioners and stakeholders shared their insights, culminating with reflections from funders. General themes, remarks, and recommendations raised can be summarized as follows:
Panel 1: Bridge-building for meaningful transparency from platforms
There is consensus among regulators, academics, platforms, and the public that transparency from platforms is needed. Yet discerning what constitutes meaningful transparency, and the appropriate level of policy intervention to achieve it, is challenging.
An integrity professional and policy expert suggested that platform companies' talk about transparency can be rather performative. For instance, Jack Dorsey's resignation letter from Twitter voiced a desire for transparency, yet few companies release meaningful data to the public of their own accord. Integrity workers are experts on platform design and governance and can define "meaningful transparency" concretely.
Several regulators outside the US and the EU shared how they weigh the trade-offs between transparency, national security, data privacy, and human rights. In such considerations, access to independent on-platform expertise, provided by organizations like the Integrity Institute's membership and existing partnerships with advocates, is invaluable.
Maintaining a healthy balance between preserving privacy and enabling content moderation and transparency on platforms is an especially thorny trade-off, where regulators can benefit from additional expertise and engagement.
The regulatory approach to tech policy has also evolved, with an understanding that the speed of technical development in the private sector often requires continual horizon scanning to elicit policy recommendations.
One regulator noted that predicting an intervention’s efficacy relies upon an understanding of platforms’ internal policies, governance, and dynamics.
Although the majority of attendees were US-based, one funder noted in their reflection that most advances in platform regulation are happening outside the US. Therefore, US-based funders should pay attention to regulatory trends globally and consider supporting the growing demand that non-US institutions have for expert input, especially since countries often build on one another's efforts in crafting tech policy proposals.
Panel 2: Bridge-building for implementing meaningful transparency
The region with the most active momentum behind tech policymaking is the European Union, where the implementation details of the Digital Services Act are being worked out. The Integrity Institute and its partners participated in the Code of Practice on Disinformation process in 2022, and they shared specific examples from that process as a successful bridge-building model in which integrity professionals and their expertise made civil society organizations' advocacy more effective.
An academic expert shared a pair of parallel challenges: on the one hand, it can be difficult to even begin research on platform activities, as researchers often meet pushback from companies when asking for data; on the other hand, it can be difficult to assess any given piece of research's impact on platforms' behavior. For both challenges, robust communication between researchers, integrity professionals, regulators, and policymakers is essential, especially in contexts where direct engagement with platforms is limited or obstructed.
For example, integrity professionals can provide insights into what types of data researchers should push for access to and whether platforms' responses are reasonable. Having integrity professionals present among all stakeholders, including researchers and policymakers, also helps sharpen academic research's applicability to regulatory and policymaking endeavors.
Putting the bridge-building model from the academic expert into practice, a civil society advocate and several Institute members shared extensive information about the process of collaborating on the Code of Practice on Disinformation:
Given that the scope and ambition of European regulators are unprecedented, a huge technical gap is immediately obvious. It can be extremely challenging for civil society organizations advocating for change to counter the imbalance in technical knowledge between civil society and platform companies.
Integrity professionals from the Institute helped close this gap by providing technical expertise, and partnerships between civil society advocates, regulators and policymakers, and organizations like the Integrity Institute have helped create balanced proposals that are actionable and enforceable.
Overall, participants in this panel emphasized the importance of partnerships between integrity workers, regulators, and advocacy groups. As one funder summed up at the end of the panel, having an organization that can speak on technical matters from a position of authority transforms regulators' and policymakers' horizon of possibility on tech policy. As such, supporting integrity professionals and incorporating them into existing partnerships between civil society, regulators, and policymakers can have positive cross-cutting effects across the wide range of issue areas impacted by platform behavior.
Panel 3: Bridge-building for levers of influence outside of "public policy"
Outside of formal public policy, there are many other levers that can incentivize platforms to prioritize integrity.
Integrity professionals have some degree of power and autonomy within platforms to enact best practices, and resources from the Integrity Institute (such as best practices guides) provide external validation that helps integrity professionals advocate for change from within.
Direct engagement with companies can be fruitful as well, especially when companies are given public credit when they do enact positive change.
Original, openly published research, in which companies are publicly held to account for negative behavior, is the other side of the same coin.
In the current ecosystem, stakeholders are actively working to communicate and translate across boundaries, and integrity professionals are part of that effort as well. As one funder summed up, although policymakers, researchers, and advocates might find tech companies inaccessible and incomprehensible, adding integrity professionals to the mix is closing the gap.
Conclusion
This could be global tech policy’s biggest year yet. At this very moment, several countries and regions have adopted or are working toward major regulatory and legislative frameworks (including provisions on risk assessments & mitigation, recommender systems, algorithmic audits, etc.). Whether these policy developments will result in actionable, effective, and enforceable regulations that are also constructive for social platform innovation remains to be seen. A global, multi-stakeholder effort right now, however, is our best bet to advocate for a healthy social internet.
To successfully leverage this opportunity, oversight bodies, researchers, and civil society organizations must understand the inner workings of platforms. Stakeholders such as policymakers often lack expertise on how these companies and their technologies work. Additionally, public-interest-minded workers and managers at these companies face a combination of organizational constraints and collective action problems. As platforms prepare for the implementation and enforcement of these new legal frameworks, increased technical expertise and independent auditing capacity are essential as well.
There is an unprecedented opportunity now to influence corporate policy at platforms through movement building. Integrity workers equally need support, not just within their companies but also from funders, stakeholders, and regulators. The Integrity Institute is working to develop the glue that brings them all together. Throughout the briefing, many attendees expressed how rare it is to have different stakeholders across sectors and geographies speak to one another in one place. We are in a moment when we must build such bridges between experienced tech practitioners and the stakeholders working toward a social internet where all can thrive.