Absolutely Interdisciplinary 2025 explores new frontiers in AI research

 

At SRI’s annual conference, participants discussed future directions and key challenges in artificial intelligence (AI) research, including the complexities of aligning advanced AI with human values and interdisciplinary perspectives on AI safety. On stage from left: Nitarshan Rajkumar, SRI research lead Karina Vold, and Atoosa Kasirzadeh.


As AI systems become more autonomous and deeply integrated into daily life, how do we govern their use and retain human control?

That question was at the heart of Absolutely Interdisciplinary 2025, the Schwartz Reisman Institute for Technology and Society’s (SRI) annual academic conference. Hosted in person at the Schwartz Reisman Innovation Campus on May 29, the event brought together researchers, policymakers, and technologists from across disciplines to examine the evolving landscape of AI safety, governance, and accountability.

In his opening remarks, SRI Director David Lie emphasized the importance of collaboration across fields.

“We need researchers from all fields to come together and discuss how technology is reshaping society. Its impact today is broader than ever, and understanding—and guiding—those effects requires diverse perspectives.”

Reflecting on his past interdisciplinary work at the intersection of computing, law, and public policy, Lie set the tone for the day’s discussions.

“I had a key interdisciplinary insight while collaborating with Lisa Austin in the Faculty of Law. Engineers and lawyers often seem at odds—engineers even hesitate to work with lawyers. But I realized they’re actually quite similar: both are problem-solvers—engineers use technology, lawyers use language. And I hope everyone at the conference today can make these kinds of connections.”

 

Governing the frontier

The first session, New Frontiers in AI Governance, moderated by philosopher of technology Karina Vold, explored the challenges of regulating powerful AI systems. Philosopher Atoosa Kasirzadeh (Carnegie Mellon University) and computer scientist Nitarshan Rajkumar (University of Cambridge) offered insights drawn from their policy and technical experience.

Kasirzadeh, a Schmidt Sciences AI2050 Early Career Fellow, highlighted the tensions between fostering innovation and mitigating risk. Rajkumar, co-founder of the UK’s AI Security Institute and AI Safety Summit and a key drafter of the EU’s General-Purpose AI Code of Practice, reflected on what Canada could learn from the UK’s efforts.

Their discussion revealed the complex entanglement of AI safety with questions of power, geopolitics, and institutional design.

“Centralization is an enormous issue,” said Rajkumar. “Seventy-five percent of AI compute in the world today is based in the US, 15 percent is based in China… you want to make Canada the best place in the world to build AI data centres, but what is that going to mean when the UAE can spin up gigawatts and build up nuclear reactors far quicker than we can, and we’re starting from zero?”

 

When bigger isn’t better

In the morning keynote, The Slow Death of Scaling, Sara Hooker, VP of Research at Cohere and head of Cohere Labs, challenged a key assumption in modern machine learning: that scaling up computing power always improves models.

Delivered virtually and moderated by SRI Chair Roger Grosse, Hooker’s talk examined the limits of scale and cautioned against policy tools, like chip export controls, that rely on oversimplified views of the relationship between compute and harm.

“The idea that compute thresholds can serve as reliable proxies for danger is increasingly shaky,” she argued. “Policymaking based on this premise risks being both overbroad and ineffective.”

Her remarks called for evidence-based approaches to regulating emerging AI risks.

 

Autonomy and accountability

The afternoon panel, Navigating Autonomy and Accountability in AI Agents, explored ethical and legal challenges posed by AI systems that operate without direct human oversight.

Moderated by legal scholar Anna Su, the session featured Megan Ma (Stanford University) and Atrisha Sarkar (Western University), who examined the shifting dynamics of human-AI collaboration.

Sarkar, joining virtually from Western, focused on the difficulty of assigning responsibility within complex systems. Ma, joining from Stanford, described how legal education is struggling to keep up with the reality of generative AI’s capabilities.

“What we’re noticing is it’s not just about outsourcing or giving discrete tasks to these [AI] agents; we’re seeing more and more the conceptualization of AI agents as teammates or colleagues,” said Ma. “They’re much more than just our tools.”

The discussion underscored the need for both legal reforms and new conceptual frameworks to ensure accountability.

 

Rethinking existential risk

The conference concluded with a keynote that reframed the conversation on AI risk. Rather than a sudden catastrophe, David Duvenaud, Schwartz Reisman Chair in Technology and Society and Canada CIFAR AI Chair at the Vector Institute, warned of a gradual erosion of human influence—what he called “gradual disempowerment.”

Moderated by AI safety expert Sheila McIlraith, Duvenaud’s talk described how incremental advances in AI could slowly shift control over critical systems, from markets to governance, without a clear tipping point.

“We might not notice the tipping point until it’s too late,” he warned. “We need multiple perspectives to understand technology’s impact because it’s going to affect everything. Right now, computer scientists have succeeded in designing AI that is going to effectively replace humans in almost every important domain. Dealing with this is going to require understanding our entire civilization.”

Audience members were left considering the unsettling possibility that existential risk may arrive not with a bang, but with a gradual fade.

 

Looking ahead

As the day ended with a closing reception, one message stood out: Absolutely Interdisciplinary 2025 was not just a forum for ideas—it was a call to action.

Whether through legal reform, scientific research, or new governance models, participants stressed the urgency of shaping AI’s future before it shapes us.

“I love Absolutely Interdisciplinary because it's a really good opportunity to meet people from across campus working on all sorts of problems,” said Lie. “I find it very energetic. The conversations I have here are among the best I have all year.”

With each passing year, the conference affirms its role as a vital space for examining the technologies remaking our world—and for imagining futures still within our grasp.

