
Absolutely Interdisciplinary 2021 | Human and Machine Normativity: New Connections


The Schwartz Reisman Institute’s inaugural academic conference, Absolutely Interdisciplinary, will take place virtually from November 4-6, 2021. The event begins with a one-day graduate workshop, “Views on Techno-Utopia,” followed by four scheduled sessions across two days, with opportunities for interaction and socialization in between these events.

About Absolutely Interdisciplinary

Complex new technologies like AI are advancing at a furious pace across all sectors and industries. It’s imperative that we understand the capacities and limitations of these new systems, but not only from a technical perspective. Absolutely Interdisciplinary will bring together researchers committed to building new, interdisciplinary approaches that advance our understanding of how to ensure AI and other powerful technologies promote human well-being.

2021 Theme: Human and Machine Normativity

This year, Absolutely Interdisciplinary takes place under the theme of “Human and Machine Normativity: New Connections.”

One of the key challenges in AI ethics is the alignment problem—ensuring technologies are aligned with our values and serve the common good. Simultaneously, cutting-edge research in the humanities and social sciences is shedding light on the complexity of our shared systems of values and norms: how they evolve, how they are maintained, and how they shape our behaviour. Often, AI researchers and those studying the nature of norms in other disciplines are approaching the same problems from different angles. This conference will identify these researchers, bring them into dialogue, and forge new interdisciplinary connections to illuminate these important questions.

Humans are a fundamentally normative species, with complex cognitive and social systems for shaping behaviour to implement collectively determined values and norms that support cooperation. Norms in this sense refer not to what most people actually do, but to what people should do: the ubiquitous formal and informal prescriptive rules of behaviour—everything from the seemingly arbitrary, such as what clothing to wear to a funeral, to the clearly important, such as avoiding injury or harm to others.

Building AI systems that are robustly aligned with human values requires a deep understanding of how these normative systems work. At the same time, advances in AI present unique opportunities to investigate and test which capacities contribute to our ability to build, maintain, and abide by norms.

Absolutely Interdisciplinary will foster the interdisciplinary conversations needed to map the connections between human and AI normativity. Participants will contribute to and learn about emerging areas of research and new questions to explore. Each session will pair researchers from different disciplines to address a common question, followed by a facilitated group discussion. By connecting people who work on similar questions from different perspectives, the conference will develop the interdisciplinary approaches and research questions needed to understand how AI can be aligned with the full range of diverse human normative systems.



Guest speakers

  • Jeff Clune, research team leader at OpenAI and an associate professor of computer science at the University of British Columbia. Clune’s work focuses on deep learning, evolving neural networks, and robotics.

  • Vincent Conitzer, Kimberly J. Jenkins Distinguished University Professor of New Technologies and professor of computer science, economics, and philosophy at Duke University. Conitzer’s work focuses on AI’s objectives, game theory, and ethics.

  • Deborah Gordon, professor of biology at Stanford University. Gordon studies how ant colonies work without central control using networks of simple interactions, and how these networks evolve in relation to changing environments.

  • Moritz Hardt, assistant professor of electrical engineering and computer sciences at the University of California, Berkeley. Hardt’s research investigates the reliability, validity, and societal impact of algorithms and machine learning.

  • Joel Z. Leibo, research scientist at DeepMind and research affiliate with the McGovern Institute for Brain Research at MIT. Leibo investigates and evaluates deep reinforcement learning agents and their performance on complex cognitive tasks such as cooperation.

  • Sarah Mathew, associate professor at the School of Human Evolution and Social Change at Arizona State University. Mathew investigates how and why humans cooperate, and the evolution of our unique form of cooperation.

  • Deirdre Mulligan, professor in the School of Information at UC Berkeley. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems.

  • Johanna Thoma, associate professor at the Department of Philosophy, Logic and Scientific Method at the London School of Economics. Thoma’s work sits at the intersection of philosophy, economics, and public policy, and includes practical rationality, decision theory, ethics, public policy evaluation, economic methodology, and the application of economic methods to philosophical problems.


The Schwartz Reisman Institute for Technology and Society strives towards inclusion and equity. If you are experiencing financial constraints and wish to apply for free admission to the conference, please contact Events Coordinator Jackelyn Ho at jackelyn.ho@utoronto.ca.
