New ideas and connections as Absolutely Interdisciplinary takes off


The inaugural edition of Absolutely Interdisciplinary featured researchers from a variety of disciplines working on similar questions, mapping out new terrain for thinking about human and machine normativity.


Absolutely Interdisciplinary 2021 has come to a close. The newly formed annual event hosted by the Schwartz Reisman Institute for Technology and Society (SRI) took place from November 4-6, 2021.

Held virtually for the inaugural edition, Absolutely Interdisciplinary brought together 13 speakers from across three continents, and over 270 participants from around the world, to explore the theme of “Human and Machine Normativity: New Connections.” 

“A deeper theoretical understanding of normativity is key to achieving the goal of building artificial intelligence that is aligned with human values,” observed SRI Director and Chair Gillian Hadfield in an introductory session. Normativity is a key component of Hadfield’s research, which investigates how the rules societies develop to aid cooperation are crucial not only for the evolution of human intelligence but also for machine intelligence.


“Our goal for this conference was to map out new terrain for thinking about human and machine normativity in a deeply interdisciplinary way,” notes Hadfield. “I’ve just been blown away by the generosity and connections that we’ve seen across all the sessions that took place. By bringing together researchers working on similar questions from different backgrounds, we can clearly see how valuable these conversations are for developing new approaches towards the study of AI.”

Utopian prelude

A graduate workshop centred on the theme of techno-utopias served as a prelude to the opening day of the conference. Literature, healthcare, blockchain, labour commodification, data governance, language models, and ethics were all covered across three remarkable sessions. Speakers engaged with parallels and asymmetries between science fiction worlds and technological realities, data privacy and the labour behind digital platforms, and concerns around the ethics of technical systems and the ways we understand technology.


Absolutely Interdisciplinary 2021 guest panelists (clockwise from top left): Jeff Clune, Vincent Conitzer, Deborah M. Gordon, Moritz Hardt, Johanna Thoma, Deirdre Mulligan, Sarah Mathew, Joel Z. Leibo.

Day 1: Developing new models of understanding

On Day 1, attendees explored the significance of biological models for AI in “Social Organisms and Social AI,” featuring guest panelists Jeff Clune (OpenAI, University of British Columbia) and Deborah Gordon (Stanford University).

“We have no idea really, or very little idea, of how nature produces the complexity explosion that we see,” observed Clune, describing his research into evolutionary systems via generative algorithms. “It's almost like nature can't help but produce an explosion of diversity and complexity… The question that wakes me up every morning is, what are the key ingredients from the natural world that enable this to happen?”

The links between Clune’s research on evolutionary adaptation and Gordon’s work in biology revealed striking synergies. Discussing her studies of the ecology of collective behaviour, developed through extensive field research documenting ant colonies in varying environments, Gordon pointed to Clune’s work on modularity as a significant principle for structuring robust and effective networks. Her data demonstrate the real-world significance of the principles Clune outlined; as she noted, “In biological evolution, it’s always about the relationship between participants in the world around them that generates variation and selection.”

Day 1’s second session explored “Fairness in Machine Learning,” with speakers Moritz Hardt and Deirdre Mulligan, both professors at the University of California, Berkeley. In their presentations, Hardt and Mulligan considered how to develop methods that encompass broader notions of fairness, and what historical and cultural inequities—including racial, gender, and economic differences—stand in the way of the application of these principles in machine learning.


The BMO Performers-in-Residence performing “Fresh Bard.” L-R: Ryan Cunningham, Maev Beaty, Sébastien Heins.

Hamlet meets GPT-2

To bring the opening day to a close, Canadian new media artist David Rokeby, director of the University of Toronto’s BMO Lab, and the Lab’s Performers-in-Residence held a unique performance in which the classically trained actors performed the output of a GPT-2 model trained on the plays of William Shakespeare. The result was not only a highly entertaining improvised play that veered equally into the absurd and the profound, but also an illuminating talk from the artists, whose engagement with new technologies represented yet another facet of the conference’s interdisciplinary scope.

“We are interested in taking AI models into real space and giving performers models to engage with in real time,” Rokeby observed. “By fine-tuning our GPT-2 model on a range of different theatrical sources, including the works of Shakespeare, we’ve played around with them—not in an attempt to try to create really good theater, but rather in an attempt to understand the output of these systems, to take them seriously, to ride the language, and to think about what it means to interpret them.”

Day 2: Building foundations for the future

Saturday began with a morning panel comprising members of the Cooperative AI Foundation (CAIF) and the journal Collective Intelligence, two exciting initiatives opening new funding and publication opportunities for interdisciplinary teams studying cooperation and collective intelligence.

“The vision for Collective Intelligence rests on the often-neglected observation that all biological systems and social systems are collective systems with many interacting parts, each of which has a somewhat different window on the world, and this is a fact that fundamentally unites them,” explained Jessica Flack, chief editor of the new transdisciplinary journal and a professor at the Santa Fe Institute.

Echoing the theme of meaningful cooperation, Microsoft’s Chief Scientific Officer Eric Horvitz told attendees, “The deeply social nature of humans sits at the foundation of our cognitive abilities and explains our accomplishments as a species, and this gets me excited about thinking about the role of AI, and the role of cooperation in AI moving forward.”

“There’s a great opportunity to pull together and be a centripetal force by supercharging funding—helping to coordinate efforts around AI methods aimed at coordination, cooperation, and complementarity. I think that this kind of a foundation with its funding and goals can do that,” said Horvitz, who sits on the Cooperative AI Foundation’s Board of Trustees. 

Afternoon sessions on “Cooperative Intelligence” and “Computational Ethics” brought together the disciplines of anthropology, computer science, economics, and philosophy. Sarah Mathew, an associate professor at the School of Human Evolution and Social Change at Arizona State University, shared her research on the Turkana pastoralists of Kenya and explored normativity in a decentralized cultural system. Research scientist Joel Z. Leibo of DeepMind provided a counterpoint to Mathew’s research through his investigations of the minimal components required for the emergence of a human-like normative system in AI.

“What are the minimal components?” asked Leibo. “I think it’s a really important question because it lets us take a reverse engineering perspective, and that's how I see the goal of my group at DeepMind. We're trying to reverse engineer human social intelligence and cultural complexity.”


Jennifer Nagel, Johanna Thoma, Vincent Conitzer, and Gillian Hadfield participate in the “Computational Ethics” panel.

The conference’s final session contemplated what new ethical problems AI ushers forth. Speakers Vincent Conitzer (Duke University) and Johanna Thoma (London School of Economics) explored the question from a philosophical perspective that invoked both political and technical dimensions, considering real-world applications such as self-driving vehicles.

In her closing remarks, Hadfield reaffirmed SRI’s commitment to interdisciplinary collaboration. “During this conference, everyone presented their thinking in ways that were accessible to people in other disciplines, and I think that’s our real challenge. We’re all clearly on the same journey of thinking about these transformative technologies, and the critical role of our normative systems. I want the Schwartz Reisman Institute to be able to play the role of providing that infrastructure of connecting people to support collaboration.”
