SRI partners with Data Sciences Institute on “Toward a Fair and Inclusive Future of Work with ChatGPT”

 
Image: a computer-generated illustration of two people facing computer screens displaying text, rendered with a blurry, pixelated aesthetic in primary colours with a pastel overlay.

Can we understand the full impact of generative AI on various communities and domains? How should we evaluate the performance and potential risks of ChatGPT and related tools? A new program led by SRI researchers delves into the societal implications of using ChatGPT. 


Despite the growing use of ChatGPT and other emerging AI techniques, there is currently no systematic method to evaluate their performance and potential risks comprehensively. This gap in evaluation frameworks hampers our understanding of the full impact of generative AI on various communities and domains. To address these concerns and bridge the evaluation gap, SRI Associate Director Lisa Austin, Faculty Fellow Shion Guha, and Faculty Affiliates Syed Ishtiaque Ahmed, Anastasia Kuzminykh, and Shurui Zhou are setting out to study and analyze the impact of generative AI on a wide range of communities.

Toward a Fair and Inclusive Future of Work with ChatGPT is a program co-sponsored by the Schwartz Reisman Institute for Technology and Society (SRI) and the Data Sciences Institute (DSI) at the University of Toronto. As one of DSI’s Emergent Data Science Programs, it focuses on the responsible development and ethical implementation of generative AI, specifically examining the impact of ChatGPT on diverse communities. By delving into the societal implications of using ChatGPT, the program aims to equip researchers and users with a deeper understanding of the potential social impacts and ethical considerations associated with these technologies.

With this increased awareness and knowledge, users of ChatGPT will be better able to navigate its use with confidence, making informed decisions and adopting responsible practices, so that the broader community can harness the capabilities of generative AI while mitigating potential risks. By bridging the gap between academia and real-world applications, the project fosters a comprehensive understanding of the social implications of ChatGPT, benefiting both the academic community and the broader user base, and ultimately contributing to the responsible and impactful development of generative AI technologies.

As a starting point, the researchers will use ChatGPT as a study subject to design, develop, and evaluate a platform that aims to comprehensively assess the capabilities, limitations, and ethical considerations of generative AI. This will enable the development of guidelines, policies, and safeguards that can guide the responsible and ethical use of generative AI in various domains, promoting trust, accountability, and transparency in the AI ecosystem.

To achieve comprehensive insights, the program will feature talks, discussions, and participatory design sessions. Individuals from a variety of backgrounds, including students, instructors, practitioners, academics, and artists, will have the opportunity to share their perspectives on and experiences with ChatGPT.

The first event of this program is the Responsible LLM-human collaboration: Hackathon and symposium, which takes place on October 4 and 5, 2024. Participants can choose to attend either one or both days, and registration is free. No prior experience in coding, computer science, or data science is required.

Additional workshops and public-facing meetups will be organized to foster inclusivity and encourage open dialogue, amplifying the voices of minority communities. The program will also develop a comprehensive course syllabus module to equip students with a well-rounded understanding of the ethical considerations surrounding generative AI. By promoting collaboration and knowledge exchange, the program aims to pave the way for responsible and informed AI practitioners across diverse domains.

Want to learn more?

 

