Building democracy into recommender systems will require new tools and frameworks

 
A photograph of a 3D shape casting two different shadows: one circular and one rectangular.

In a session at Absolutely Interdisciplinary 2022, SRI Associate Director Peter Loewen, Jonathan Stray (Berkeley Center for Human-Compatible AI), and Taylor Owen (Max Bell School of Public Policy, McGill University) discussed what methods and principles might be used to redesign the algorithms that decide what billions of people see in accordance with democratic values. Photo: Daniels Joffe, Unsplash.


The internet is drowning in content. News stories vie for our attention alongside vacation photos from friends, recommendations for TV shows, podcasts, memes, and viral videos. To help users navigate this vast sea of information, platforms increasingly rely on recommender systems that filter, sort, and rank content in complex and sophisticated ways.

Recommender systems are machine learning-based algorithms that drive how users engage online, and are responsible for determining what content appears across a wide range of digital platforms, including search engines, media providers, and social networks. While the designs of these systems are often proprietary and hidden from users, they are typically engineered around metrics of how users behave and engage online.

In recent years, the all-encompassing nature of recommender systems, and concerns around their potential harms, have brought the governance of these algorithms into question. Should the design of recommenders be left to the corporations who build them? If other groups, such as users, were to have a say in the design of recommender systems, how might they be best consulted? What information ought to be available on how these algorithms work, and how might we bring them into better alignment with democratic principles?

In a session at Absolutely Interdisciplinary 2022, a conference held by the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto, SRI Associate Director Peter Loewen of the Munk School of Global Affairs and Public Policy led a discussion between Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI, and Taylor Owen, the Beaverbrook Chair in Media, Ethics, and Communication at McGill University’s Max Bell School of Public Policy and founding director of the Centre for Media, Technology, and Democracy, on the possibilities for democratizing the design of recommender systems. Across a wide-ranging conversation that considered both technical and policy-based approaches, the panel grappled with questions around free speech, social good, and whether it is possible for platforms to operate along the same democratic principles as governments.

“All recommender systems are voting systems”

As Stray noted in his presentation, recommender systems are, by number of users, the largest artificial intelligence systems deployed today, engaging billions of people worldwide on a daily basis.

Stray noted that concerns around the impacts of these algorithms vary between stakeholders. While users are most concerned with what appears in their feeds and how their data is collected, media producers aim to work successfully within recommender systems in order to disseminate their own content. Platform operators, on the other hand, want to ensure that both users and creators continue to engage with their services, and aim to continually refine and optimize the experiences they provide.

Jonathan Stray

Assessing various approaches to regulation, Stray noted that “top down” models involve governments imposing requirements on platforms, such as transparency or risk assessments. By contrast, forms of direct participation may be integrated into the design of some systems. As Stray observed, “all recommender systems are voting systems, in some sense.” While the simplest ranking systems aggregate positive and negative responses, more complex recommenders incorporate feedback loops based on user behaviour, which is processed by neural networks to develop more sophisticated ranking functions. However, direct participation methods are often too difficult to implement for complex systems, such as those that drive the content that appears at the top of YouTube or Google News.
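To make the voting analogy concrete, the following minimal sketch (in Python, with invented vote counts) shows the simplest kind of ranking Stray describes: aggregating positive and negative responses into a single score. It uses the lower bound of the Wilson score interval, a standard technique for ranking by up/down votes; the production recommenders discussed in the session are, of course, far more complex.

    import math

    def wilson_lower_bound(upvotes, downvotes, z=1.96):
        """Lower bound of the Wilson score interval for the fraction of
        positive votes: ranks items by up/down votes without letting a
        handful of early votes dominate."""
        n = upvotes + downvotes
        if n == 0:
            return 0.0
        p = upvotes / n
        centre = p + z * z / (2 * n)
        margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
        return (centre - margin) / (1 + z * z / n)

    # Invented vote counts (upvotes, downvotes) for three hypothetical items.
    votes = {"item_a": (60, 10), "item_b": (6, 0), "item_c": (550, 250)}
    ranking = sorted(votes, key=lambda k: wilson_lower_bound(*votes[k]), reverse=True)
    print(ranking)  # ['item_a', 'item_c', 'item_b']

Even this toy example embeds value judgments (the confidence level, how items with few votes are treated), which is precisely why the question of who sets those parameters is a democratic one.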

Drawing on his work on participatory recommender alignment, Stray contended that both platforms and governments bring their own biases to regulation, and that a preferable method for improving the design of recommenders is to survey citizen groups, or “mini-publics,” to determine legitimate policies. However, assessing outcomes and determining causation in these systems remains challenging, as recommenders can have simultaneous positive and negative effects.

Stray also noted that changes to recommender systems are constantly deployed by platforms, with thousands of experimental alterations taking place behind the scenes every year, and hundreds of optimization processes in play. “The actual process of optimization at the corporate level looks less like a single algorithm optimizing for a particular target objective, and more like a complicated negotiation between multiple teams who monitor different metrics,” Stray observed.
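As a rough illustration of that observation, a platform’s ranking objective can be pictured as a weighted combination of predictions monitored by different teams, with the weights themselves the product of negotiation and experimentation. The metric names and weights below are entirely hypothetical, a sketch of the structure rather than any platform’s actual objective.

    # Hypothetical predictions for a single candidate item; the metric
    # names and values are invented for illustration only.
    predicted_metrics = {
        "watch_time": 3.2,    # monitored by an engagement team
        "shares": 0.4,        # monitored by a growth team
        "report_rate": 0.01,  # monitored by an integrity team
    }

    # Each weight encodes a negotiated trade-off between teams; the
    # integrity metric enters negatively so predicted harm lowers the score.
    weights = {
        "watch_time": 1.0,
        "shares": 0.5,
        "report_rate": -50.0,
    }

    def ranking_score(metrics):
        """Combine team-owned predictions into one ranking score."""
        return sum(weights[name] * value for name, value in metrics.items())

    print(ranking_score(predicted_metrics))  # 3.2*1.0 + 0.4*0.5 + 0.01*-50.0 = 2.9

In this picture, each experimental change Stray mentions is, in effect, a proposal to move one of these weights or metrics, evaluated against the dashboards of every other team.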

Challenges for policy approaches

What democratic values should be embedded in algorithms, and what protocols can support this from a policy perspective? As Owen noted, such debates “are too often disconnected from the technical conversations,” and there are few opportunities to bridge policy-informed and technically-informed perspectives.

Owen highlighted that a major shift in the governance of large-scale platforms is currently underway, driven by a growing recognition of potential harms at both the individual and societal level. This shift is characterized by a model that increasingly seeks to regulate the output of systems in advance, rather than to punish bad behaviour after the fact. As Owen observed, this places considerable focus on the system design of algorithms, and seeks to change the incentives that shape the design process itself.

Taylor Owen

Owen identified three key ways that conversations around regulation are taking shape. First, policy-makers are investigating the effects that recommender systems are having on society. Owen agreed with Stray that these effects are often in deep tension with each other, noting that limited access to data (with companies’ internal research mostly hidden from public view) is a major obstacle. Increased transparency is therefore a major focus of current policy approaches, with specific consideration being given to what levels of data should be available to the general public, researchers, and regulatory bodies.

A second key question is who should have control over how recommender algorithms work. Describing his work conducting citizens’ assemblies, Owen contended it is clear the public wants more ownership over their digital lives, including stronger data privacy regulations. However, a challenge in this regard is that some initiatives may have unintended adverse effects: granting users full control over their feeds, or insisting on real-name identification, can have negative consequences as well as positive ones.

The third category identified by Owen concerns accountability for recommender systems. Most governance work will happen at this level, and could involve different models that place legal responsibilities on platforms to design algorithms in ways that consider known risks and mitigate them appropriately. These requirements will lead to codes of practice governing the design of algorithms, developed by regulators in collaboration with civil society and academic researchers. However, what liabilities will be established for platforms that fail to meet such codes remains an open question.

On the freedom and constraints of online speech

A key theme of the ensuing discussion was constraints on free speech, and the differences between forms of dangerous and hate speech. The panel agreed that defining hate speech can be challenging, and that a focus on outcomes and harm reduction offers the most productive basis for limits: for example, whether online speech acts lead to real-world violence, or involve content that is clearly non-consensual or illegal.

Owen noted speech laws may need to be determined internationally, and pointed to the need for system-level thinking to calibrate policies that minimize risk. However, he also highlighted a core challenge with such a model: experiences of harm are very real for people, and lead to an acute desire for policies that are totally effective. While some contexts may require accepting a solution that is only partially effective, this is far from easy to do.

Stray observed that constraints around harmful speech can sometimes unintentionally amplify it—an outcome known as the Streisand effect—and that online suppression of some extremist groups has paradoxically served to further spread their narratives. Whether it is possible to effectively suppress harmful political orientations online is therefore unclear, Stray ventured.

A deeper point of concern raised by the panel was the extent to which online spaces interact with existing conflicts in society, and may even contribute to and exacerbate them. Are the structures of online spaces causing people to lose their capacity to engage empathetically with each other? Owen noted that while it is difficult to measure the broad health of social discourse, citizens’ assemblies are a useful tool: while people with differing perspectives may arrive assuming they will disagree based on past online interactions, they often end up reaching a reasonable middle ground. “This tells me something about the character of the debate we are having online,” observed Owen.

On this point, Stray noted that insights from the field of conflict studies can aid in shifting the discourse around recommender systems, from the prevention of harmful statements to enabling what he terms “better conflicts.” “Conflict is normal for societies,” observed Stray. “It’s how they change for the better.”

As the panel came to a close, it was clear that further extensive study of recommender systems will be essential to diagnosing the health of our society and our democratic norms. As Owen noted, governments have recognized the potential for democratic harm in these systems and are stepping in to enact regulation—the question is no longer whether such policies will be crafted, but how.
