Absolutely Interdisciplinary 2022 explores new solutions for a changing technological landscape

 

The Schwartz Reisman Institute’s academic conference Absolutely Interdisciplinary 2022 hosted eight panels featuring 30 presenters, with sessions offering innovative responses to the challenges of today’s technological landscape, including questions of data privacy, explainable AI, evolutionary approaches to system design, digital rights, and recommender algorithms.


Convening a wide range of researchers and thought leaders, the Schwartz Reisman Institute for Technology and Society at the University of Toronto hosted the second edition of its academic conference Absolutely Interdisciplinary from June 20–22, 2022. The three-day event showcased eight panels and 30 presenters, with sessions offering unique new solutions to many of the key challenges of today’s technological landscape.

The conference’s topics included reshaping the digital world in the service of social good and human rights, evolutionary approaches to system design, governance issues surrounding explainable AI and recommender algorithms, and the influence of these tools on society.

“What’s different about Absolutely Interdisciplinary is that we are trying to break out of the regular conference format,” noted SRI Associate Director Peter Loewen during the event’s opening remarks. “What we really want is for speakers to come together and have a conversation with each other that moves beyond disciplinary boundaries and invites people to share their knowledge and insights.”

Among the conference’s unique features is its structure: instead of grouping specialists with the same research background within a single panel, each session paired thinkers from different academic arenas working on a common research question, leading to conversations that transcended a narrow disciplinary focus to generate broad new insights into the topic at hand.

 

Clockwise from top left: SRI Associate Director Lisa Austin moderates a discussion on the potential of public data trusts to enable research for social good, with Aziz Z. Huq (University of Chicago), Robert Seamans (New York University), Pamela Snively (TELUS), and Eric Horvitz (Microsoft).

 

Re-orienting the digital landscape towards public good

In the conference’s first main session, “Redrawing data boundaries,” SRI Associate Director Lisa Austin discussed the viability of making data collected by private companies available for greater public access through data trusts, enabling researchers to leverage it for social good initiatives. Panelists discussed potential issues around data privacy and portability, with industry representatives Pamela Snively (TELUS) and Eric Horvitz (Microsoft) both responding approvingly to the concept. As Aziz Z. Huq (University of Chicago) noted, the question of who owns personal data is “remarkably unclear”—while companies exercise control over data they collect through technological tools, this does not equate to ownership. The session concluded with an agreement amongst panelists on the need for greater data literacy: how we conceive of data largely determines how we will use it, and frameworks must be better understood by the public and updated alongside the new ways data shapes and defines the world today.

A similar theme emerged from “Digital constitutionalism and the futures of digital governance,” a talk by SRI Faculty Fellow Anna Su on the importance of digital rights, which kicked off the conference’s third day. “Digital technologies don't just affect our digital lives,” observed Su, “but shape all our collective social systems.” Su argued that human rights can serve as a foundation for efforts towards a digital constitution, but this requires revising existing frameworks in three significant ways: first, we must apply and broaden our understandings of existing rights to the digital sphere; second, we must recognize a “new vocabulary” of rights unique to digital spaces and contexts; and third, we must understand rights as touchstones of a landscape of shared values such as equality, democracy, and the rule of law. “We need to take advantage of the emancipatory potential of this new moment,” concluded Su. “Technology may be evolving to dizzying levels, but human needs remain the same.”

 

Clockwise from top left: SRI Associate Director Sheila McIlraith, Natasha Jaques (Google Brain), and Jennifer Nagel (University of Toronto) discuss the potential for social learning to inform multi-agent reinforcement learning systems.

 

Evolutionary approaches towards multi-agent AI systems

In “Natural and artificial social learning,” SRI Associate Director Sheila McIlraith led an inspiring discussion between Jennifer Nagel (University of Toronto) and Natasha Jaques (Google Brain) on the potential of social learning—a remarkable feature of natural intelligence across the animal kingdom—to inform and build better multi-agent AI systems. Nagel demonstrated how social learners gain knowledge from other agents, with humans engaging in highly complex forms of selective social learning from an early age to orient themselves in the world. While many hypotheses about the utility of social learning are logistically impossible or unethical to test, Nagel ventured that Jaques’ research into reinforcement learning (RL) systems that incorporate these principles makes this increasingly possible. Jaques described how she uses RL to develop agents with intrinsic social motivations by incorporating learning behaviour through reward design, and the potential for this approach in systems like self-driving cars, where agents must learn in sophisticated ways from others without necessarily copying them. Jaques demonstrated that agents with this ability are better suited to adapting to new environments, opening up a potentially vast area of future AI development inspired by insights and intelligence from the natural world.
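To make the idea of intrinsic social motivation through reward design more concrete, here is a minimal sketch in Python of one way a reward can be shaped to value influence on other agents. It is an illustration of the general approach rather than a description of Jaques’ actual systems, and every function name, weight, and toy distribution in it is an assumption made for this example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete action distributions."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def socially_shaped_reward(env_reward, peer_policy_given_action,
                           peer_policy_counterfactuals, beta=0.1):
    """
    Illustrative reward shaping: add to the environment reward an intrinsic
    'social influence' bonus measuring how much a peer's action distribution,
    conditioned on the action the agent actually took, differs from what the
    peer would have done on average under the agent's counterfactual actions.
    (All names and the beta weight are placeholders for this sketch.)
    """
    # Marginalize the peer's behaviour over the agent's counterfactual actions.
    marginal = np.mean(peer_policy_counterfactuals, axis=0)
    influence_bonus = kl_divergence(peer_policy_given_action, marginal)
    return env_reward + beta * influence_bonus

# Toy example: the peer's behaviour shifts noticeably when the agent acts,
# so the shaped reward exceeds the raw environment reward.
actual = [0.7, 0.2, 0.1]
counterfactuals = [[0.3, 0.4, 0.3], [0.2, 0.5, 0.3]]
print(socially_shaped_reward(1.0, actual, counterfactuals, beta=0.5))
```

The central design choice in a sketch like this is the weight on the social bonus, which trades off the environment’s own reward against the motivation to affect other agents.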

Inspiration from the natural world was also a central feature of a session led by SRI Research Lead Denis Walsh on “Collective agency in evolution and AI,” which featured Kate Larson (University of Waterloo) and Richard Watson (University of Southampton). Larson described different models of cooperation for collective systems of artificial agents, noting how the initial research questions that guide the development of these systems have major implications for their design and capabilities. While certain axioms and conditions provide incentives to cooperate, Larson proposed that an evolutionary focus might reveal new and alternate solutions for self-organization that would guide future directions for research. Watson gave a sweeping account of evolutionary history that considered how individuals emerge through processes of organizational development, noting that while transitions between stages of evolution are fundamental to biology, social evolution theory is poorly equipped to assess this feature. “Evolutionary transitions in individuality are not just about putting things into new containers for selection or relatedness,” observed Watson. “A transition in individuality is driven by a change in the nature of the relationships between things.” In linking insights from evolutionary biology and AI system design, the session brought forward new ideas to rethink not only concrete problems in computer science, but the underlying assumptions that frame disciplinary approaches in general.

 

Clockwise from top left: Philippe-André Rodriguez (Global Affairs Canada), Finale Doshi-Velez (Harvard University), and Boris Babic (University of Toronto) discuss the limitations and possibilities of explainable AI models.

 

New regulatory concepts for new technologies

A third major theme at Absolutely Interdisciplinary explored the need for new policy frameworks to address technologies that are entrusted with important decisions yet lack transparency about how they arrive at their outcomes. In “Explanation and justification in AI,” Philippe-André Rodriguez (Global Affairs Canada) moderated a discussion between Finale Doshi-Velez (Harvard University) and Boris Babic (University of Toronto) on explainable AI and the limits of interpretable (or “white box”) versus partial-view (or “black box”) models. Doshi-Velez demonstrated that while we can understand the processes used by interpretable models, this becomes harder with partial-view models that utilize deep learning techniques with hidden layers, leading to a trade-off between the accuracy of these models and their lack of interpretability. Babic argued that the current paradigm of explainability is merely “fool’s gold,” given that it is typically a post-hoc rationale fitted to the output of the system. Using the example of prescription medicine, Babic contended that we do not always require a full causal account of how a technique works, as long as the method is subjected to norms and regulations to ensure its outcome is adequately tested to be safe, fair, and equitable.
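As a rough illustration of that trade-off, the sketch below trains an interpretable decision tree alongside a black-box neural network on synthetic data, then fits a surrogate tree to the network’s predictions, the sort of post-hoc rationale Babic cautions against mistaking for the model’s real reasoning. The dataset, models, and parameters are placeholders chosen for this example and were not part of the session.

```python
# Minimal sketch: an interpretable model vs. a black-box model, plus a
# post-hoc "explanation" fitted after the fact to the black box's outputs.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Interpretable ("white box"): a shallow tree whose decision rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))

# Black box: a neural network with hidden layers; its internal reasoning is opaque.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                          random_state=0).fit(X, y)

# Post-hoc surrogate: a tree fitted to the black box's *predictions*, not the
# ground truth. It is a rationale constructed after the output, which is the
# kind of explanation the discussion warns against treating as the real process.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```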

Finally, in the conference’s last session, Peter Loewen moderated a discussion between Jonathan Stray (Berkeley Center for Human-Compatible AI) and Taylor Owen (McGill University) on “Building democratic social choice into recommender systems.” As the panelists observed, researchers are just beginning to assess the social and psychological effects of these powerful algorithms, which are used to select personalized content for users, and the question of who ought to hold the authority to decide how these systems are managed remains open. Stray demonstrated that recommenders are not a single algorithm optimizing for only one form of engagement, but form a contested space occupied by hundreds of different systems within a platform’s infrastructure. Recommenders are in essence voting systems, Stray noted, and the impacts of user decisions inform outcomes not only within a given platform, but also within a broader landscape of different content providers. Owen noted that aligning recommender systems with democratic outcomes necessitates an emphasis on regulatory tools to change how the systems are designed and developed. More transparency from platforms is required to assess their impacts, and legislation could enable access to multiple layers of data for users, researchers, and regulators. Both panelists described their own efforts towards developing participatory frameworks, such as the creation of “mini-publics” and citizens’ assemblies to assess the needs of potential regulation and user control over recommenders.
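For readers who want a concrete picture of what “not a single algorithm optimizing for only one form of engagement” can mean in practice, the sketch below blends several scoring signals with adjustable weights when ranking candidate items. All signal names, weights, and items are invented placeholders rather than a description of any real platform’s system; shifting the weights changes what surfaces first, which is precisely where questions of authority and democratic control arise.

```python
# Minimal sketch: a feed ranking that blends several scoring signals with
# adjustable weights, rather than optimizing a single engagement metric.
# Every signal name, weight, and item here is an illustrative placeholder.
from typing import Dict, List

def rank_items(items: List[Dict[str, float]],
               weights: Dict[str, float]) -> List[Dict[str, float]]:
    """Score each candidate item as a weighted sum of its signals and sort."""
    def score(item):
        return sum(weights.get(signal, 0.0) * value
                   for signal, value in item.items() if signal != "id")
    return sorted(items, key=score, reverse=True)

candidates = [
    {"id": 1, "predicted_clicks": 0.9, "predicted_dwell": 0.2, "source_diversity": 0.1},
    {"id": 2, "predicted_clicks": 0.5, "predicted_dwell": 0.6, "source_diversity": 0.8},
]

# Shifting the weights (by a platform team, a regulator, or a participatory
# process) changes which item surfaces first.
engagement_only = rank_items(candidates, {"predicted_clicks": 1.0})
balanced = rank_items(candidates, {"predicted_clicks": 0.4,
                                   "predicted_dwell": 0.3,
                                   "source_diversity": 0.3})
print([i["id"] for i in engagement_only], [i["id"] for i in balanced])
```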

Expanding the analysis

As the discussions at Absolutely Interdisciplinary 2022 demonstrated, many of the urgent challenges at the intersection of technology and society can benefit from integrating insights from diverse fields to reframe debates through new research and solutions. Continuing its mission to help guide and enable these conversations, SRI will publish a series of articles over the coming months that will cover each session at the conference in greater depth, at which point recordings for each talk will be made available to the public.
