Absolutely Interdisciplinary 2023 ignites new conversations and insights on AI research

 

How can researchers from different fields develop shared ways of understanding the impacts of artificial intelligence (AI)? At the Schwartz Reisman Institute’s annual conference, participants explored what AI can teach us about social systems, cognition, education, creativity, and more.


From June 20 to 22, 2023, the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto hosted the third annual edition of its academic conference, Absolutely Interdisciplinary. This year, the conference took place in person for the first time, with 23 speakers from a diverse array of fields—including computer science, psychology, law, economics, education, philosophy, media studies, and literature—gathering for nine sessions spread across three days.

“The recent impacts of generative AI tools have really emphasized how important it is to build spaces for interdisciplinary conversations,” observed SRI Director and Chair Gillian Hadfield. “Our goal with Absolutely Interdisciplinary is to foster new research agendas based in the creative interplay of diverse questions and framings as we explore the potentials of these new technologies.” 

From the implications of AI as a tool for exploring human values and cognitive processes, to its transformative impacts on the future of work and education, Absolutely Interdisciplinary highlighted the evolving landscape of AI research and how advanced technologies can offer new and profound ways to reshape our understanding of society.

Cognition, storytelling, and the future of intelligence

On June 21, the conference featured a keynote lecture by Blaise Agüera y Arcas, vice-president and fellow at Google Research, and a regular participant in developing cross-disciplinary dialogues about AI and ethics, fairness and bias, policy, and risk.

“We’re all going through a bit of existential angst because most of us believe that we have been the smartest things on earth for some time now,” he said. “I don’t think that will any longer be the case.”

 

In his keynote lecture, Blaise Agüera y Arcas of Google Research provided a broad and incisive survey of the history and potential future of AI development.

 

Exploring the historical connections between neuroscience, computer science, and cognitive science, Agüera y Arcas discussed the progress made in visual perception by neural networks in recent years, and delved into the concept of artificial general intelligence (AGI), including recent debates on whether AI might become sentient and pose a risk to humans.

“Some people take AGI to mean superintelligence, others take it to mean consciousness,” said Agüera y Arcas, who highlighted storytelling as an essential quality of human experience, and how the construction of personal narratives about ourselves and our past helps to predict our future.

“We are the stories that we tell ourselves,” observed Agüera y Arcas. “And as we interact with people, we construct and edit that story over time.”

Agüera y Arcas described an experiment he and his team conducted in which they used large language models (LLMs) to generate stories, create quizzes based on those stories, and grade the quizzes. This process aimed to mirror how humans engage in social learning and pass knowledge on to each other.

Acknowledging a particular limitation of LLMs in their lack of long-term memory, Agüera y Arcas argued that if we could figure out how to “fix memories” in an AI system, “we’d get rid of the last barrier that I know of toward general AI that really is humanlike.”

Agüera y Arcas’s insights into the intersection of AI, human cognition, and storytelling raised crucial questions about the history and future of intelligence and the unique qualities of language, setting the stage for two days of sessions addressing related complex questions about AI and humanity.

AI, learning, and the role of education

In a session pairing psychology with computational neuroscience, Joel Leibo of Google DeepMind and SRI Faculty Fellow William Cunningham of U of T’s Department of Psychology presented their ongoing work on modeling human social interactions using artificial agents to test social cognitive theory. Can we learn something useful about human behaviour and psychology by creating artificial agents that model the ways we interact and socialize with each other?

According to Leibo and Cunningham, yes. They showed that by simulating coordination games in multi-agent reinforcement learning, they could test the origins of distinctly human behaviours such as in-group bias and coalition-building.

 

From left to right: Moderator Nicolas Papernot poses with panelists William Cunningham and Joel Leibo, who described how multi-agent reinforcement learning can be used to test aspects of social cognitive theory.

 

SRI Research Lead Ashton Anderson of U of T’s Department of Computer Science moderated another session on learning from a different perspective: the implications of generative AI tools like ChatGPT for educators and students.

Panelist Lauren Bialystok, an associate professor at OISE and acting director of U of T’s Centre for Ethics, offered thought-provoking ideas on why we value the concept of “originality,” and why we discipline students who violate it by using ChatGPT to write assignments.

“What is the benchmark against which cheating emerges as a moral wrong or a pedagogical error?” asked Bialystok. “Is technology the enemy of originality? What about individual originality versus collective originality? We need to start sussing out what really matters to us in student learning and student assessment.”

Bialystok’s co-panelist, SRI Faculty Affiliate Paolo Granata of St. Michael’s College, demonstrated how he is incorporating generative AI tools into his teaching methods, emphasizing how critical thinking can be augmented, rather than stifled, through approaches that train students in AI literacy.

“Learning is a human process that can elevate our spirit,” concluded Granata.

Risk and reward: AI’s capabilities, behaviours, and harms

One of the more frequent fears cited about AI is that it will replace workers. In a session on AI and the future of work moderated by SRI Research Lead Avi Goldfarb, economist Daniel Rock of the University of Pennsylvania’s Wharton School shared his recent findings on which occupations are most likely to be impacted by large language models.

“People who process information, people who have knowledge as part of their work, are more exposed,” said Rock, who was careful to note that this exposure can be either “harmful or helpful.”

“One of the key things for economists to add to this conversation is equilibrium,” said Rock. “It’s not just about AI replacing workers; there’s complementary innovation to be done here, there’s supply and demand, there’s the question of whether making one part of work cheap makes another part of it very expensive. There is a lot more work to be done here.”

Rock’s co-panelist, SRI Faculty Affiliate Frank Rudzicz of U of T’s Department of Computer Science, discussed his experiences implementing machine learning tools into the healthcare sector. Rudzicz recounted how the initial response from workers was concern that such tools would replace them, but that ultimately, human workers in healthcare were found to be indispensable—and their work was in fact augmented by powerful technologies.

Rather than “AI will replace workers,” Rudzicz suggested a more likely outcome: “workers with AI will replace workers without AI.”

 

Panelists Richard Sutton (left) and Julia Haas (right) discussed whether Sutton’s reward hypothesis is a good model for understanding human behaviour in a session moderated by SRI Director Gillian Hadfield (centre).

 

In a session on the reward hypothesis, computer scientist Richard Sutton, chief scientific advisor and CIFAR AI Chair at the Alberta Machine Intelligence Institute, and philosopher of mind Julia Haas of DeepMind discussed how AI’s response to values and incentives can be helpful for understanding human cognition and behaviour.

Is the reward hypothesis, developed by Sutton twenty years ago, a good model for understanding human behaviour? Haas tentatively declared yes, but points of disagreement between the panelists emerged as the conversation turned to whether the hypothesis can actually guide decision-making for individuals and societies.

New languages, new frameworks for understanding

Absolutely Interdisciplinary wrapped up with a session on AI and creativity led by SRI Faculty Fellow Avery Slater, featuring presentations by literary scholar N. Katherine Hayles, whose work explores the relations between science, literature, and technology, and between embodiment and data, and UK-based poet Polly Denny, who presented her experiments with text-generating AI systems that have yielded new forms of artistic collaboration.

In her talk, Hayles presented recent research on the implications of LLMs for writing, language, and the ways we conceptualize and understand the world, proposing a working definition of cognition as a process that interprets information within contexts that connect it with meaning. 

“Where there's life, there’s cognition. But computational media also have cognitive capabilities,” Hayles observed, contending that machines have their own worlds of experience that human values are enmeshed with through a symbiotic relation.

 

From left to right: Panelists Polly Denny, N. Katherine Hayles, and moderator Avery Slater discussed the impact of AI on creative expression and language.

 

On this note, Hayles’s conference-closing talk came full circle to Agüera y Arcas’s opening keynote. Where he highlighted the role of narration about the self as constitutive of what we might think of as AGI, Hayles also framed the question of whether AI is “alive” in similarly fundamental terms of how perception and meaning-making construct and define the environments in which we operate.

Among the conference’s highlights was its atmosphere of collaboration and open dialogue, as the event’s cross-disciplinary framing encouraged participants to explore the intersections where different approaches could be aligned. Through conversations incorporating philosophy, history, science, ethics, and social issues, speakers and participants critically examined the role of AI in shaping our collective future.

By addressing many of the fundamental questions facing the field of AI research today, and challenging existing paradigms that divide research into narrow disciplinary confines, Absolutely Interdisciplinary provided a platform for meaningful conversations whose insights will inform how we approach AI in the years to come.

With photos by Jamie Napier.
