Regulatory gaps and democratic oversight: On AI and self-regulation

 

Regulatory gaps create economic and political incentives for the companies developing and deploying artificial intelligence to create their own set of rules, writes SRI Research Assistant Alyssa Wong. Below, Wong explores the benefits and drawbacks of self-regulation, and highlights the ultimate need for democratic oversight to ensure accountability, transparency, and consideration of public interests.


Governments around the world have begun proposing legislation, guidance, and regulation to determine the proper use of AI. For example, the European Union’s AI Act is on track to become the world’s first comprehensive AI legislation, while countries such as Canada, the United States, and the United Kingdom are pursuing their own laws. Meanwhile, 46 countries have voluntarily adopted the Organisation for Economic Co-operation and Development’s AI Principles, which are non-binding but highly influential. At the same time, states are seeking to strike a balance between the goals of ensuring safe AI development and encouraging innovation.

The rapid pace of AI innovation, however, is at odds with the comparatively slow pace of policy-making. While sometimes caused by inefficiencies or lack of specialized knowledge, policy development also takes longer as a result of vital democratic oversight. Among other benefits, norms and procedural requirements embedded in democratic systems ensure lawmakers hear a wide variety of perspectives and remain accountable to their constituents.

Self-regulation in the tech industry

These regulatory gaps create economic and political incentives for the companies developing and deploying these technologies to create their own set of rules. For example, Google has committed to seven AI principles and promised not to deploy AI in four key areas, and its subsidiary, DeepMind, has multiple ethics initiatives. Microsoft has several initiatives under its Responsible AI banner, including adopting principles, encouraging partners to do the same, and publishing standards. Meta’s Oversight Board reviews content moderation decisions on Facebook and Instagram. The four aforementioned companies, along with Amazon and IBM, founded the Partnership on AI, a not-for-profit coalition that works to “advance responsible governance and best practices in AI.” Microsoft, Google, OpenAI, and Anthropic founded an industry body that will strive to promote “safe and responsible development of frontier AI systems.” Each of these companies has also engaged in conversations with politicians or standards organizations to help guide public regulatory efforts.

While self-regulation efforts fill regulatory gaps left by the public sector and provide governments with industry and technical expertise, they do so in potentially self-interested ways that lack democratic oversight, and they can leave lasting negative legacies.

First, the mechanisms of self-regulation can become the foundation for government-made law despite being devoid of democratic considerations. Industry-developed principles or rules sometimes become the de facto norm in a given industry, but governments may turn this “soft law” into hard law. There are benefits to soft law—it is typically easier and quicker to develop than hard law and it can adapt to changes more quickly—but encoding it in legislation or regulation risks baking in rules that initially prioritized corporate goals over the public good. The risk of taking a direct path from self-regulation to hard law is exacerbated in a sector such as AI, where policy-makers rarely have the skills or expertise to navigate such a highly technical and dynamic field. Such a situation can create a dependent relationship between government and industry, pressuring law-makers to step back and leave decision-making to industry.

Second, even where some regulation—or at least the possibility of regulation—does exist, self-regulation can occur regionally when corporations engage in “regulator shopping,” the act of choosing a particular jurisdiction in which to operate so as to face more favourable treatment than elsewhere. If governments observe such movements, a race to the bottom may ensue, in which they loosen regulations to attract investment. This also means that companies may export corporate values in AI governance to locations where there are few alternatives to their technology and services. While these exports may reflect otherwise internationally accepted regulation, they risk undermining the legitimacy of locally relevant and developed AI regulation.

Self-regulation: Agility, criticisms, and citizen outcomes

Self-regulation is not inherently bad. Such initiatives are often undertaken with the stated goals of improving transparency, accountability, and sustainability and preventing stakeholder harm. Unconstrained by bureaucracy, self-regulation has the potential to be more agile and creative. For example, Meta’s Oversight Board was developed quickly, can issue decisions quickly with wide discretion, and is not bound by geographic boundaries or limited to a single set of country-specific laws.

At the same time, it is easy to see how some features of the Oversight Board attract criticism. Although technically legally independent from its creator, the Board receives funding from Meta, bringing into question its independence and long-term viability. Uncertainty about how to characterize the Oversight Board—it has been compared to judicial courts, quasi-judicial bodies, administrative bodies, and international human rights tribunals—makes it difficult to establish standards by which its performance can be measured. Such analogies also risk overlooking fundamental differences between the corporate Oversight Board and democratic bodies. Additionally, some view the Board as merely an effort to improve Meta’s public relations.

These worries can be applied to self-regulation generally, which has been criticized as a front to avoid sanctions associated with traditional regulation or a way to improve public image without actually changing industry practices. Self-regulation faces concerns about transparency and accountability to the public, as well as a lack of control by governments. Moreover, self-regulation is often led by the largest market players, which are most able to devote resources to private research and advocacy. This can lead to a select group of private interests dominating the conversation while focusing on their preferred regulatory outcomes.

Self-regulation specific to AI: How does it work?

Self-regulation is nothing new and appears in many industries with varying levels of success. Certain professions, like lawyers and doctors, effectively self-regulate in many countries and maintain high levels of professionalism and ethics. With less success, social media regulation was largely left to platforms until recently, drawing criticism for enabling the proliferation of harmful or misleading content. A starker failure of self-regulation can be seen in oil-rich countries like Nigeria where, prior to reform, weak regulations and enforcement allowed companies to essentially self-regulate while enabling human rights and environmental abuses.

But AI is different from other industries. Its ubiquity alone makes it unique, not to mention its disruptive nature and transformative potential. AI is being incorporated into consumer products like search engines and dating apps. It is used in vital industries like healthcare and is routinely used in workplaces. AI is so commonplace that decisions about AI regulation necessarily involve contested normative ethical questions. Although terms like “ethical,” “fair,” and “responsible” are widely invoked, they are poorly defined by the public sector, allowing self-regulatory efforts to shape the conversation. Clear understandings of normative principles are necessary to ensure that regulations are enforceable—and even more so when those principles must be applied universally.

Given the nature of AI, it is vital that democratic processes and participation are part of the regulatory process. Concentrating regulatory power in industry risks undermining the legitimacy of resulting rules due to a lack of democratic oversight. Citizens may not have their interests represented because they may not be invited to join the conversation or may not have the power to influence the dialogue. Concentrating regulatory power in industry may also undermine the democratic process itself by pressuring lawmakers, shutting out other stakeholders, and moving the process out of the public eye. Even with increased transparency efforts, self-regulation risks leaving the public with little understanding of rules created by parties who are not directly accountable to them.

AI has the potential to fundamentally change industries, lives, and regulation itself. The recent meteoric rise of generative AI has pushed the debate about regulation to the forefront of public consciousness. This is a unique time, one in which it is critical to consider and push for democratic processes in AI regulation. To avoid the pitfalls of past self-regulatory efforts, democratic oversight is needed to ensure that these fundamental changes take place with sufficient accountability measures, transparency, and consideration of public interests.

About the author

Alyssa Wong is a JD candidate at the University of Toronto Faculty of Law and a research assistant at the Schwartz Reisman Institute for Technology and Society. She is interested in the law and governance of technology, with a particular interest in privacy and data governance.

