Humans and LLMs: Partners in problem-solving for an increasingly complex world
A recent hackathon and symposium co-sponsored by SRI and U of T's Data Sciences Institute explored new ways of using large language models responsibly, training students and faculty to design efficient, interdisciplinary solutions that promote responsible AI use.
Innovating care: Exploring the role of AI in Ontario’s health sector
What opportunities and challenges does AI present for healthcare? At a recent SRI workshop, experts examined how AI is transforming Ontario's healthcare sector, highlighting its potential to improve care and addressing pressing challenges around patient involvement, health equity, and trustworthy implementation.
What do we want AI to optimize for?
SRI researcher Silviu Pitis draws on decision theory to study how reward design principles for reinforcement learning agents are formulated. He also aims to understand how large language models make decisions by examining their implicit assumptions. Pitis has received a prestigious OpenAI Superalignment Fast Grant to support his research.
Making big leaps with small models: What are small language models and super tiny language models?
The size of a language model significantly affects its adoption and use. SRI policy researcher Jamie A. Sandhu explores how small models are making big impacts in the field of AI.
SRI partners with Data Sciences Institute on “Toward a Fair and Inclusive Future of Work with ChatGPT”
Despite the growing use of ChatGPT, there are no established methods for evaluating its performance and potential risks. SRI Associate Director Lisa Austin, Faculty Fellow Shion Guha, and Faculty Affiliates Anastasia Kuzminykh and Shurui Zhou are setting out to study and analyze the impact of generative AI on a wide range of communities. Learn more about "Toward a Fair and Inclusive Future of Work with ChatGPT."
Secure and Trustworthy ML 2024: A home for machine learning security research
How can we help people recognize AI-generated images? Can we prevent copyrighted materials from being used in training data? What’s going on in the new field of forensic analysis of ML systems? These and related topics were at the centre of the 2024 Secure and Trustworthy Machine Learning (SaTML) conference in Toronto. Read the highlights.