
Absolutely Interdisciplinary returns this spring to explore new frontiers in AI research
The Schwartz Reisman Institute’s annual academic conference Absolutely Interdisciplinary returns for 2025 to explore interdisciplinary approaches to AI governance, risk and safety.
What’s Next After AIDA?
In the wake of AIDA’s demise and with a federal election on the horizon, a key question has emerged: what comes next for AI regulation in Canada?
Unequal outcomes: Tackling bias in clinical AI models
A new study by SRI Graduate Affiliate Michael Colacci sheds light on how often machine learning algorithms produce biased outcomes in healthcare settings, and advocates for more comprehensive, standardized approaches to evaluating bias in clinical AI.
Safeguarding the future: Evaluating sabotage risks in powerful AI systems
As AI systems grow more powerful, ensuring their safe development is critical. A recent paper led by David Duvenaud with contributions from Roger Grosse introduces new methods to evaluate AI sabotage risks, providing insights into preventing advanced models from undermining oversight, masking harmful behaviors, or disrupting human decision-making.
New cohort of SRI faculty affiliates and postdocs announced for 2025
The Schwartz Reisman Institute for Technology and Society (SRI) is thrilled to welcome eight new faculty affiliates and three new postdoctoral fellows to its vibrant research community.
Upcoming SRI Seminars showcase new insights on cutting-edge AI research
The SRI Seminar Series returns for 2025 with leading experts exploring AI’s impacts from a wide range of disciplines, including computer science, psychology, law, philosophy, and communication.
Information about our world: SRI/BKC workshop explores issues in access to platform data
What kinds of solutions should we consider for gaining access to platform data, and which purposes can justify this access? These and related questions were the topic of an event co-hosted by SRI and the Berkman Klein Center’s Institute for Rebooting Social Media at Harvard University, coordinated by Lisa Austin.
Humans and LLMs: Partners in problem-solving for an increasingly complex world
A recent hackathon and symposium co-sponsored by SRI and U of T's Data Sciences Institute explored new ways of using large language models responsibly, with students and faculty receiving training on how to design efficient, interdisciplinary solutions to promote responsible AI usage.
Innovating care: Exploring the role of AI in Ontario’s health sector
What opportunities and challenges does the use of AI present in healthcare? At a recent SRI workshop, experts explored how AI is transforming Ontario’s healthcare sector, highlighting its potential to improve care and examining pressing challenges around patient involvement, health equity, and trustworthy implementation.
What do we want AI to optimize for?
SRI researcher Silviu Pitis draws on decision theory to study how the principles of reward design for reinforcement learning agents are formulated. He also aims to understand how large language models make decisions by examining their implicit assumptions. Pitis has received a prestigious OpenAI Superalignment Fast Grant to support his research.
SRI experts tackle questions about AI safety, ethics during panel discussion
What does safe artificial intelligence look like? Could AI go rogue and pose an existential threat to humanity? These were among the pressing questions tackled by SRI experts during a recent panel discussion on AI safety.
Making big leaps with small models: What are small language models and super tiny language models?
The size of a language model significantly shapes its adoption and use. SRI policy researcher Jamie A. Sandhu explores how small models are making big impacts in the field of AI.