How algorithms can strengthen democracy: Ariel Procaccia on designing citizens’ assemblies
Modern Western democracies are founded on the election of representatives—individuals who represent the political goals of their constituents. However, an alternative means of choosing political representatives through random selection may yield more efficient ways to make policy decisions. This idea forms the basis of sortition, which has in recent years been used to create citizens’ assemblies on topics ranging from constitutional issues to climate change.
While the concept can be traced back to Ancient Greece, sortition is not necessarily as simple as randomly drawing members of the populace to be part of an assembly. In his recent seminar at the Schwartz Reisman Institute for Technology and Society, Ariel Procaccia outlined novel developments in designing algorithms for sortition that meet key practical and theoretical requirements.
Procaccia is the Gordon McKay Professor of Computer Science at Harvard University, and works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. His distinctions include the Social Choice and Welfare Prize (2020), a Guggenheim Fellowship (2018), the IJCAI Computers and Thought Award (2015), and a Sloan Research Fellowship (2015). To make his research accessible to the public, Procaccia has co-founded several not-for-profit websites, including Spliddit.org (with SRI Faculty Affiliate Nisarg Shah) and Panelot.org, and he regularly contributes opinion pieces.
Designing algorithms for fair and accurate representation
In his talk, Procaccia highlighted a fundamental challenge in the design of sortition processes: how can we construct a demographically representative panel of citizens while giving everyone a fair chance to be selected? He delineates the typical selection pipeline into three stages. First, a subset of the population (“letter recipients”) is probabilistically chosen to receive invitations to the panel. Then, a smaller subset of the recipients volunteer to enter the pool of candidates. Finally, the actual panel is chosen, again with some randomness, from the volunteer pool. Unfortunately, the panel often ends up non-representative because of what happens at the second stage: recipients from different demographic and educational backgrounds self-select into the volunteer pool at very different rates.
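To make the self-selection problem concrete, here is a minimal simulation of the three-stage pipeline. All numbers (the population share of degree holders, the invitation count, the volunteering rates) are invented for illustration; only the direction of the skew follows the talk.

```python
import random

random.seed(0)

POPULATION_SIZE = 100_000
# Hypothetical: 40% of the population holds a university degree.
population = [{"degree": random.random() < 0.4} for _ in range(POPULATION_SIZE)]

# Stage 1: invitations go out uniformly at random.
recipients = random.sample(population, 10_000)

# Stage 2: recipients self-select into the volunteer pool. The rates here
# are made up, but the skew (degree holders volunteer more often) mirrors
# the bias Procaccia describes.
pool = []
for person in recipients:
    rate = 0.10 if person["degree"] else 0.03
    if random.random() < rate:
        pool.append(person)

# Stage 3 would draw the panel from this already-skewed pool.
share = sum(p["degree"] for p in pool) / len(pool)
print(f"Degree holders: ~40% of the population, {share:.0%} of the volunteer pool")
```

Even with uniformly random invitations, the volunteer pool in this toy run is dominated by degree holders, which is exactly the skew the panel-selection stage must then correct.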
The standard remedy is to impose quotas on demographic subgroups when choosing panellists from the volunteer pool. One can then apply a simple greedy algorithm: repeatedly identify the most under-filled quota, starting from the rarest demographic, and select a volunteer who helps fill it. And yet a different problem arises with this approach: many individuals end up with no chance of being selected at all! Certain people, by virtue of the demographic groups to which they belong, are never needed to satisfy a quota, because other demographics always take priority under the greedy scheme. Using demographic quotas is therefore typically unfair to individual citizens.
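Here is a minimal sketch of such a greedy quota-filling baseline. The feature-and-quota encoding is a hypothetical illustration of the scheme described above, not Procaccia's implementation.

```python
from collections import Counter

def greedy_panel(pool, quotas, panel_size):
    """pool: list of dicts mapping feature name to value, e.g. {"age": "18-34"}.
    quotas: dict mapping (feature, value) pairs to required counts."""
    panel, counts = [], Counter()
    remaining = list(pool)
    while len(panel) < panel_size and remaining:
        # Find the quota that is furthest from being met.
        deficits = {fv: need - counts[fv] for fv, need in quotas.items()}
        worst = max(deficits, key=deficits.get)
        if deficits[worst] > 0:
            feature, value = worst
            # Volunteers whose features never match the currently most
            # urgent quota may never be picked: this is the individual
            # unfairness described above.
            candidates = [p for p in remaining if p.get(feature) == value]
        else:
            candidates = remaining  # all quotas met; take anyone
        if not candidates:
            return None  # quotas infeasible with this pool
        chosen = candidates[0]
        panel.append(chosen)
        remaining.remove(chosen)
        for feature, value in quotas:
            if chosen.get(feature) == value:
                counts[(feature, value)] += 1
    return panel
```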
Procaccia’s work proposes an alternative approach. Instead of sampling individuals directly to form a panel, we can shift focus towards predetermining the composition of a set of candidate panels, from which the actual panel is eventually drawn at random. In this way, we can require both representativeness (every candidate panel is approximately representative of the wider populace) and individual fairness (the lottery over panels gives each volunteer some probability of being selected). To meet the second criterion, Procaccia draws on the leximin optimization problem from the literature on fair division. Intuitively, when we want to divide goods between individuals, leximin first maximizes the payoff of the worst-off individual. Then, it breaks ties between allocations based on how well the second worst-off individual fares, then how the third worst-off fares, and so on. Conceptually, leximin extends the Rawlsian maximin principle from the worst-off individual alone to everyone receiving an allocation of goods.
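Below is a minimal sketch of the first (maximin) stage of this optimization, assuming the candidate panels have already been enumerated; full leximin would repeatedly re-solve the program, fixing the probabilities of the worst-off volunteers before optimizing the next worst-off. The toy panel sets and the solver choice (SciPy's `linprog`) are illustrative assumptions, not the talk's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_panel_distribution(panels, num_volunteers):
    """panels: list of sets of volunteer indices (the candidate panels).
    Returns probabilities over panels that maximize the minimum
    selection probability across all volunteers."""
    m = len(panels)
    # membership[i, j] = 1 if volunteer i sits on panel j
    membership = np.zeros((num_volunteers, m))
    for j, panel in enumerate(panels):
        for i in panel:
            membership[i, j] = 1.0
    # Variables: panel probabilities p_1..p_m, then the floor t.
    # Maximize t  <=>  minimize -t.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For each volunteer i:  t - sum_{j: i on panel j} p_j <= 0
    A_ub = np.hstack([-membership, np.ones((num_volunteers, 1))])
    b_ub = np.zeros(num_volunteers)
    # Panel probabilities sum to one.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (m + 1))
    return res.x[:m], res.x[m]

# Toy example: every volunteer appears on some panel, so everyone
# receives a positive selection probability (floor = 2/3 here).
probs, floor = maximin_panel_distribution(
    [{0, 1, 2}, {0, 1, 3}, {1, 2, 3}], num_volunteers=4)
print(probs, floor)
```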
In the case of sortition, using leximin to optimize selection probabilities over candidate panels yields very positive results. Across a dozen real-world citizens’ assembly instances, leximin gives everyone a positive probability of selection, whereas the greedy algorithm achieves this in only two.
Nonetheless, as is often the case with complex optimization problems, solving sortition with these methods is computationally and practically difficult. Procaccia and his team have therefore provided efficient implementations through Panelot.org since April 2020, and the site has been used by the state of Michigan to select a 1,000-member panel on COVID-19. He also provides mathematical guarantees that the number of candidate panels can be shrunk for efficiency while still retaining approximate individual fairness.
Finally, Procaccia also shows preliminary evidence that the entire sortition pipeline may be simplified to avoid self-selection issues in the first place. In particular, if letter recipients’ self-selection probabilities could be estimated, one could sample from the skewed volunteer pool in a way that cancels out those biases. Using one such method, Procaccia demonstrates clear improvements over the greedy algorithm on a simulated example.
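A minimal sketch of this inverse-probability idea follows. Here `est_self_selection` is a hypothetical estimator of each person's probability of having volunteered, and the redraw-until-distinct sampling is a simplification rather than the method from the talk.

```python
import random

def debiased_sample(pool, est_self_selection, panel_size, rng=random):
    """pool: list of volunteers; est_self_selection: hypothetical callable
    returning a person's estimated probability of having volunteered."""
    # Weight each volunteer inversely to their self-selection probability,
    # so that end-to-end selection chances are roughly uniform across
    # the underlying population.
    weights = [1.0 / est_self_selection(person) for person in pool]
    panel_idx = set()
    # random.choices samples with replacement, so we redraw until we have
    # panel_size distinct people -- a simplification that only roughly
    # preserves the target probabilities.
    while len(panel_idx) < panel_size:
        (pick,) = rng.choices(range(len(pool)), weights=weights, k=1)
        panel_idx.add(pick)
    return [pool[i] for i in panel_idx]
```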
Novel computational solutions to age-old social issues
Procaccia’s research highlights an impactful way of improving our collective decision-making processes through novel computational solutions to the sortition problem. What makes his contributions especially meaningful is that computational decision-making systems often rest on rigid assumptions about human behaviour, such as those built into game-theoretic models, that may not hold in practice. By illustrating the immediate improvements his algorithms bring to existing and emerging social problems that call for citizens’ assemblies, Procaccia shows the significance of a solutions-oriented approach.
How, and to what extent, can computational techniques enable us to make better decisions according to our societal values? This is a fundamental question about the development of algorithms, platforms, and sociotechnical systems more broadly, and it heavily influences my own work as a computational social scientist. My research is often concerned with measuring how humans behave in existing technological contexts, such as spending time on online platforms or interacting with algorithmic risk assessments. While my endeavours are thus largely descriptive, Procaccia’s research program highlights a complementary, more normative approach that is needed to align human decision-making with core societal values like fairness and political legitimacy.
Furthermore, at a methodological level, the techniques Procaccia presents have implications for social science research on fairness. In the discussion that followed the talk, SRI Associate Director Peter Loewen noted how sortition can be applied to survey science for government consultations, which may mandate fairness towards individual participants. These concerns translate to applications in private industry as well. If an online platform were to survey its user base for evaluations of, say, its algorithmic recommendations, a sampling method that gives every person a non-zero chance of having their voice heard is intuitively a significant improvement over one that does not. There are many ways in which Procaccia’s work on sortition and social choice might translate to social-scientific research more broadly, and his efforts to harness the power of algorithms to strengthen democratic engagement offer others in the field a powerful example of the potential of new technologies to promote human flourishing.
About the author
Lillio Mok is a PhD candidate in computer science under SRI Faculty Affiliate Ashton Anderson at the University of Toronto, and an SRI graduate fellow. He conducts research in the area of computational social science using methods from human-computer interaction, social data science, and applied machine learning. Currently, his work addresses value alignment problems in sociotechnical systems through the lens of measuring and evaluating how people behave online. Follow him on Twitter or LinkedIn.