Humans and LLMs: Partners in problem-solving for an increasingly complex world
Hallucinations. Bias. Misinformation. The relationship between humans and today’s powerful AI systems can often feel adversarial.
But what might it look like if humans and large language models worked together? What if these two agents—natural and artificial—helped each other design efficient, interdisciplinary solutions that benefit people, tackle problems, and promote responsible AI use?
Shurui Zhou, an assistant professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering and a faculty affiliate at SRI, recently brought a group of students and faculty together to explore these and related questions.
Making big leaps with small models: What are small language models and super tiny language models?
The size of language models significantly impacts their adoption and usage. Did you know that small language models and super tiny language models can produce rich, task-specific outputs and increasingly outperform larger models across various benchmarks? SRI Policy researcher Jamie A. Sandhu writes about small models making big impacts in the field of AI.
SRI partners with Data Sciences Institute on “Toward a Fair and Inclusive Future of Work with ChatGPT”
Despite the growing use of ChatGPT, we lack a method to evaluate its performance and potential risks. SRI Associate Director Lisa Austin, Faculty Fellow Shion Guha, and Faculty Affiliates Anastasia Kuzminykh and Shurui Zhou are setting out to study and analyze the impact of generative AI on a wide range of communities. Learn more about "Toward a Fair and Inclusive Future of Work with ChatGPT."
SRI Seminar Series returns to explore new questions at the intersection of technology and society
The SRI Seminar Series returns for fall 2024 with leading experts across various fields, including computer science, communications, law, healthcare, and philosophy. Seminars will explore new questions at the intersection of technology and society through critical issues such as trust, inequality, public policy, and the ethical implications of AI systems.
What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime
In April 2024, the Government of Canada pledged $2.4 billion toward artificial intelligence (AI) in its annual budget, including $50 million earmarked for a new AI Safety Institute. What scope, expertise, and authority will the recently announced Canadian AI Safety Institute likely need in order to achieve its full potential? We examine the early approaches of AI safety institutes in the UK, the US, and the EU.
From mourning to machine: Griefbots, human dignity, and AI regulation
Griefbots are artificial intelligence programs designed to mimic deceased individuals by using their digital footprint. Griefbots raise significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including a consideration of health, privacy, and property laws in Canada.
The smart way to run smart cities: New report explores data governance and trusted data sharing in Toronto
A new report from SRI Research Lead Beth Coleman, SRI Graduate Fellow Madison Mackley, and collaborators explores questions such as: How can we facilitate data-sharing across divisions to improve public policy and service delivery? What are the risks of data-sharing, how can we mitigate those risks, and what are the potential benefits of doing it right?
SRI Director David Lie and collaborators awarded $5.6 million for cutting-edge research on robust, secure, and safe AI
SRI Director David Lie and 18 collaborators—including five other SRI researchers—will receive $5.6 million in grants over the next four years to develop solutions for critical artificial intelligence (AI) challenges. Learn more about the new funding from NSERC and CSE.
Schwartz Reisman Institute announces new faculty affiliates for 2024-25
Get to know the 15 new faculty affiliates joining the SRI research community for the 2024–25 academic year. The new cohort of affiliates has expertise in a variety of fields across social sciences, humanities, and STEM disciplines, including geography, psychology, information studies, management, criminology, sociology, history, cultural studies, public health, physiology, pharmaceutical sciences, computer science, and engineering.
Schwartz Reisman Institute announces 2024 fellowship recipients
The Schwartz Reisman Institute for Technology and Society is proud to welcome four new faculty fellows and 15 graduate fellows from across the University of Toronto. SRI fellowships support interdisciplinary research projects that build new approaches to examine the complex relations between technology and society.
Secure and Trustworthy ML 2024: A home for machine learning security research
How can we help people recognize AI-generated images? Can we prevent copyrighted materials from being used in training data? What’s going on in the new field of forensic analysis of ML systems? These and related topics were at the centre of the 2024 Secure and Trustworthy Machine Learning (SaTML) conference in Toronto. Read the highlights.
Initiative trains U of T students to integrate ethical considerations into tech design
As challenges such as AI safety, data privacy, and misinformation become increasingly prevalent, the Embedded Ethics Education Initiative (E3I) integrates ethics modules into select undergraduate computer science courses. In recognition of the program’s impact on the undergraduate student learning experience, the team behind E3I has won the 2024 Northrop Frye Award (Team), one of the prestigious U of T Alumni Association Awards of Excellence.