Algorithms and the justification of power

 

In a recent SRI Seminar, philosopher Seth Lazar of Australian National University explored the implications of the widespread use of algorithms in public spaces, and the questions they raise for governance and power. SRI Graduate Fellow Morgan MacInnes reflects on Lazar’s presentation.


When is it justifiable for the state to govern social relations? To what extent should governments exercise power within the public sphere? These are perennial questions for political philosophers, and the subject of extensive debate.

Adding further complexity to questions of governance is the rise of an algorithmically ruled landscape, in which social relations are largely mediated through technology. For Seth Lazar, whose work explores the implications of new technologies for political philosophy, our prevailing norms that guide social conduct and legitimate governance are ill-equipped to handle this fundamentally different world.

Lazar is a professor of philosophy at Australian National University, where he is a principal investigator for the Machine Intelligence and Normative Theory Lab (MINT). He also serves as a research fellow at the Oxford Institute for Ethics in AI, and recently co-founded the Philosophy, AI and Society Network (PAIS), a group of scholars working to explore the moral, legal, and political implications of data and artificial intelligence (AI).

In a recent seminar at the Schwartz Reisman Institute for Technology and Society, Lazar probed the question of how advances in data analytics and machine intelligence impact power structures, including the responsibility to govern, and what it would take for this power to be wielded in a justifiable way.

 

In a recent SRI Seminar, Seth Lazar of Australian National University explored how prevailing norms of governance are ill-equipped to handle public spaces designed by algorithms. (Photo: ANU)

 

The physical city and the algorithmic city

Lazar teases out these issues with a thought experiment juxtaposing the “physical city” with the “algorithmic city.” The physical city is the one we are all well-acquainted with: in this setting, institutions like the workplace or the family shape social relations, serving as the arenas in which people freely interact. Likewise, in the algorithmic city, social relations are formed within public spaces, but these include “algorithmic markets” such as Amazon or Airbnb, “algorithmic public squares” in the form of Twitter and YouTube, and algorithmic cultural venues like Spotify and TikTok. What sets the algorithmic city apart is that its opportunities for social interaction are mediated by algorithms. Search functions, cultural recommendations, and content moderation heavily influence both who people can interact with in these spaces and what types of interaction are possible.

Algorithms code certain rules into digital public spaces that enable people to do things they otherwise couldn’t, while making other actions outright impossible. For example, Facebook’s Horizon Worlds virtual reality program does not allow a user’s avatar to come within a certain distance of another person’s avatar. The physical city has rules and regulations too, but people remain free to obey them or to break them and face the consequences. The power of algorithmic intermediaries goes further: rather than meting out rewards and punishments for certain behaviour, algorithms can determine the limits of what behaviour is possible in the first place.

Since these algorithms are created by private entities, Lazar argues that “never before have so many people been subject to so much private authority.” The corporations which create these algorithms are effectively engaged in governance, and cannot escape the realities of governing, even if they would like to. This prompts the question of what it would take for the power wielded by algorithmic intermediaries to be justified.

The point of Lazar’s thought experiment is that while the algorithmic city differs from the more familiar physical city in important ways, our current ways of thinking about the governance of public spaces are all tailored to the model of the physical city. Accounts of the justifiable use of power and authority fixate on the state, and offer little insight into the newfound power of the forces that govern the “algorithmic city.” In response to this shortcoming, Lazar seeks to articulate guidelines for determining when the power to govern public spaces can be justified: principles that hold regardless of the particular kind of actor wielding that power.

Power and self-determination

Lazar contends that it is not enough for power to be used to achieve the right purpose; it must also be used in the right way (procedural legitimacy), and by the right people (proper authority). His primary concern is that those who design algorithmic intermediaries have been granted the power to chart a course for society, undermining collective self-determination. For Lazar, ultimate authority must rest with “the people” collectively, and a more limited set of individuals has proper authority only if the people have granted them the right to act on their behalf.

A possible objection to all of this is that none of these conditions is necessary to justify the power of algorithmic intermediaries, since people willingly consent to them: participation in digital platforms is voluntary. If someone disliked or objected to the way an algorithmic intermediary governed a certain digital platform, they could simply forgo its use.

Lazar does not think this is plausible. Algorithms have become so essential to the conduct of daily life that non-participation is no longer a reasonable option for most people. Additionally, an individual’s decision to participate in the algorithmic city affects others as well. While some people may find Twitter too vitriolic and cacophonous to stomach, even if they steer clear of the platform they still have to deal with any society-wide polarization or hysteria that Twitter’s algorithmic intermediation might create.

At present, the realm of digital platforms is dominated by monopolies. But some contend that if the major digital platforms of today were broken up, competition between different platforms would allow people to “vote with their feet.” With viable exit options, people could enjoy the benefits of mediation while exercising a degree of influence over the rules governing their social interactions: private entities would feel compelled to adopt modes of governance their users prefer, or lose business to competitors that do. Here too, Lazar is unconvinced. He claims that if the undesirable actions of one corporation are justified by the existence of others making preferable choices, this is tantamount to the latter subsidizing the bad behaviour of the former, which seems intuitively unjust.

By Lazar’s own admission, his project is in its early stages, and there are aspects of his arguments that require further consideration. One point stressed throughout his seminar was the notion that algorithmic intermediaries are the creation of private entities—while this is true in many cases, it does not address the tendency beyond Western countries towards a more state-directed approach to the creation and supervision of digital public spaces.

In a similar vein, Lazar invoked individual freedom and collective self-determination in explaining the need for algorithmic intermediation to be properly justified. Yet collective self-determination is not simply the sum of individual freedoms; often, the two principles come into conflict. One of the developments that concerns me most about digital public spaces and their algorithmic intermediaries is how they serve as an avenue for groups to enforce conformity on the individual. Perhaps the threat to individual freedom in the algorithmic city stems not merely from Big Brother or big business, but from ordinary people as well.




About the author

Morgan MacInnes is a PhD candidate in political science at the University of Toronto, a graduate fellow at the Schwartz Reisman Institute, a research affiliate at GovAI, and a research associate at the Centre for International Governance Innovation. His current research focuses on how technological change can alter the conduct of international relations, geopolitics, and statecraft.

