Evolutionary biology offers new perspectives on designing AI
At this year’s Absolutely Interdisciplinary conference, Kate Larson, professor of computer science at the Cheriton School of Computer Science at the University of Waterloo, and Richard Watson, associate professor at the Institute for Life Sciences and Department of Computer Science at the University of Southampton, participated in a panel on evolutionary biology and AI moderated by SRI Research Lead Denis Walsh, a professor in the Department of Philosophy and the Institute for the History and Philosophy of Science and Technology at the University of Toronto.
The panel examined the concept of collective agency, and the question of when a collection of individuals becomes a collective agent. Dialogue between panelists was motivated by key questions around what insights the disciplines of computer science and evolutionary biology can provide to one another. Can computer science offer new evidence for biologists concerning the evolution of individual and collective agents? Similarly, can evolutionary theory offer guidance to computer scientists about the structure and design of individual and collective agency in AI?
Collective agents and multi-agent systems
Larson kicked off the panel by exploring differences among multi-agent models broadly applicable to computer science, including Marvin Minsky’s “society of mind,” coalitions, and teams. For Larson, an “agent” refers to an AI decision-maker that interacts with an environment, which may include other agents. The “society of mind” model captures systems of sub-agents contributing to an output at the system or agent level, while coalitions are sub-groups of agents working together toward a goal, and teams are coalitions that act for the good of the group as a whole, sometimes even at the expense of the optimal success of individuals within the team.
Larson pointed to examples of multi-agent systems in computer science, such as the Horde architecture and the principle of separation of concerns. The Horde architecture consists of many independent reinforcement learning sub-agents, each with its own policy and reward function, organized together and working in concert. Separation of concerns is a design principle that, as the name suggests, involves modularizing “concerns” (which can be broadly interpreted) in the design of a program. As Larson observed, these types of systems must be highly engineered: what is modularized, and how, must be deliberately decided by the creators of the system. Especially given the diversity of tasks for which a program might be developed, it is an open question how best to design such systems. Thus, Larson asked: could a model informed by evolutionary biology inform how best to approach the structure and organization of these kinds of multi-agent systems?
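Larson’s description of Horde can be made concrete with a minimal toy sketch. This is a hedged illustration under assumed names (`SubAgent`, `Horde`) and a simplified TD(0)-style update, not the actual Horde implementation: the point is only the architectural shape of many independent learners, each with its own reward function, all updating from one shared stream of experience.

```python
class SubAgent:
    """One independent learner with its own private reward function
    (a stand-in for a Horde-style sub-agent; names are hypothetical)."""
    def __init__(self, reward_fn, alpha=0.1):
        self.reward_fn = reward_fn  # this sub-agent's own reward signal
        self.alpha = alpha          # learning rate
        self.values = {}            # state -> estimated value

    def update(self, state, next_state):
        # TD(0)-style update toward this sub-agent's own reward
        r = self.reward_fn(next_state)
        v = self.values.get(state, 0.0)
        v_next = self.values.get(next_state, 0.0)
        self.values[state] = v + self.alpha * (r + v_next - v)


class Horde:
    """Many sub-agents learning in parallel from one shared experience stream."""
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def observe(self, state, next_state):
        for agent in self.sub_agents:
            agent.update(state, next_state)


# Two sub-agents with different "concerns", learning from the same transitions
horde = Horde([
    SubAgent(lambda s: 1.0 if s == "goal" else 0.0),     # cares about goals
    SubAgent(lambda s: -1.0 if s == "hazard" else 0.0),  # cares about hazards
])
for state, next_state in [("start", "goal"), ("start", "hazard"), ("start", "goal")]:
    horde.observe(state, next_state)
```

After these transitions, each sub-agent holds a different valuation of the same state, which is the architectural point: one experience stream, many independent predictions, with the modular decomposition chosen by the designer.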
Self-organizing multi-agent systems: Coalitions and teams
The potential influence of evolutionary models on the design of multi-agent systems becomes even more compelling in the case of self-organizing multi-agent systems. In game theory, coalitions are sometimes the best strategy for optimizing for a particular outcome. While coalitions might be deliberately formed with a particular structure and organization, they can also be self-organized, perhaps in ways akin to multi-agent coalitions in nature. Larson posed two key questions behind the generation of coalition structure: how should coalitions be formed, and why?
While answers to the first question might come in the form of search problems for optimal structure, the case of self-organization invites further insight from evolutionary theory. In fact, answers to the first question in the self-organizing case might be informed by answers to the second question of why: the motivations, incentives, and goals for forming a coalition might constrain and/or shape the types of coalitions formed. Here, Larson suggests, there may be insight to be gleaned from evolutionary theory. What does evolutionary theory say, if anything, about the formation of coalitions of agents? Why do they form? How are they organized? What pressures, incentives, rewards, or goals impinge on their structure and organization?
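The “search problem for optimal structure” reading of the first question can be sketched directly. The following toy enumerates every partition of a small set of agents and returns the coalition structure with the highest total value; it is the brute-force version of coalition structure generation, with an invented value function standing in for whatever the domain actually rewards.

```python
def partitions(agents):
    """Yield every way of splitting a list of agents into coalitions."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for part in partitions(rest):
        # `first` forms its own coalition...
        yield [[first]] + part
        # ...or joins each existing coalition in turn
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]


def best_structure(agents, value):
    """Exhaustive search for the partition maximizing total coalition value."""
    return max(partitions(agents), key=lambda p: sum(value(c) for c in p))


# Toy value function (assumed for illustration): pairs cooperate well,
# singletons are viable, and larger groups pay a coordination cost.
def value(coalition):
    return {1: 1.0, 2: 3.0}.get(len(coalition), 0.5)


print(best_structure(["a", "b", "c", "d"], value))  # two pairs win here
```

The number of partitions grows as the Bell numbers, so exhaustive search collapses quickly with scale, which is one reason evolutionary or self-organizing answers to “how should coalitions form?” are attractive.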
Similar questions can be posed about “teams.” Teams are distinguished by what might loosely be called “altruism,” in the sense that team members may act for the good of the group, potentially at their own individual expense. The “why” and “how” of team formation in the natural world might lend insights into the design of team-based multi-agent AI systems, offering potential answers to questions of whether agents can learn to form teams, and when teams might be an optimal solution.
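One standard way to make this tension precise, offered here as an illustrative assumption rather than anything presented at the panel, is a toy public-goods game: contributing is costly for each individual no matter what the others do, yet a team in which everyone contributes does better than a group in which no one does.

```python
def payoff(contributes, total_contributions, n=4, benefit=2.0, cost=1.5):
    """One player's payoff in a toy public-goods game (all numbers assumed):
    each contribution costs its contributor `cost` and adds `benefit` to a
    pot shared equally among all n players."""
    return total_contributions * benefit / n - (cost if contributes else 0.0)


# Free-riding dominates individually: contributing always lowers your payoff...
for others in range(4):
    assert payoff(True, others + 1) < payoff(False, others)

# ...yet unanimous contribution beats unanimous defection for everyone.
assert payoff(True, 4) > payoff(False, 0)
```

The gap between the two assertions is exactly the “altruism” that distinguishes a team from a mere coalition: the group-optimal outcome is not reachable by individually optimal moves.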
Evolution and individuality
While Larson’s distinctions offered neat clarity regarding the types of multi-agent systems in AI design, Watson demonstrated through his presentation that evolutionary biology paints a far less clear-cut picture of individual agents, collections of agents, and collective agency.
Watson began by exploring how the concept of an “individual” is fraught in biology. While we can conceptually distinguish between an individual object (e.g. a wooden block), a collection of individual objects (e.g. many wooden blocks), and a collective individual object (e.g. a tower made of wooden blocks), bringing these distinctions to bear on organisms, especially as they are understood on an evolutionary timescale, is a messy business. Watson supplied examples that challenge a neat delineation between collections of individuals and collective individuals: colonies of aspen trees, flatworms, and genetic chimeras. Are these organisms one individual, or many? The question becomes even harder to answer on an evolutionary timescale.
Considering the case of a single-celled organism evolving into a multi-celled organism, Watson asked: when does a collection of solitary individuals transition to a cooperative group? When does a cooperating group of individuals transition into a new individual?
These questions capture a fundamental insight: on an evolutionary scale, individuality is not fixed, and all individuals are made up of parts that were once individuals themselves. From this observation, Watson characterizes evolution as fundamentally about individuals, noting that while adaptations to the environment are selected at the population level, it is through this process that new classes of individuals come about. As Watson emphasizes, transitions in individuality are the fundamental process of creative adaptation in evolution.
So, we can see that individuality in biology is not fixed, and that, in some ways, evolution is a process of producing individuals. Moreover, the conditions, motivations, relationships, and processes behind these transitions are clearly of considerable importance. How can we model organisms with greater clarity in light of these observations? Can the tools of computer science lend insight into modelling this picture of evolution as a series of transitions in individuality?
The paradox of individuality
Part of modelling evolution-as-individual-transition means answering further questions: What makes individuals cooperate in a group? What motivates the transition from collection of individuals to a new unit? These questions pose a challenge to evolutionary theorizing, which Watson calls “the paradox of individuality.” The transition from a collection of individuals to a new unitary individual cannot be explained by selection at the level of individual parts, because it is not in the self-interest of those individual parts to be dissolved in the formation of the new individual. Yet the transition cannot be explained through selection at the level of the new collective individual either, because its success can only be established after the formation of the new unit—it presupposes that which requires explaining.
Given this paradox and the non-fixedness of individuality in biology, Watson suggests that at least one picture of organisms is wrong: the one in which organisms are thought of as fixed individuals with their own goals, made up of parts that do not have agency. Without a clear means of delineating between a collection of individuals and a collective individual, ascriptions of agency are left adrift.
Drawing the strands of the panel together, Watson went on to suggest that what is wrong in this picture of organisms could also be wrong for artificial agents. If so, then the proper locus of scrutiny for artificial agents, as with biological organisms, may be in transitions of individuality. Thus, in answer to the questions posed by Larson, Watson poses more questions in reply.
Watch the full session:
About the author
Jessie Hall is a PhD candidate at the Institute for the History and Philosophy of Science and Technology at the University of Toronto, and a 2021–22 Schwartz Reisman Graduate Fellow. Her research focuses on what is most aptly described as philosophy of computing, situated at the intersection of philosophy of mathematics, philosophy of language, and philosophy of science. Her dissertation focuses on what it means to call a system “computational”—tracing the influences of mathematical (Turing and other) computability, functionalism, and various stripes of reductionism, on conceptions of physical computational systems, brains as computing systems, and “abstract” computing.