Humans and LLMs: Partners in problem-solving for an increasingly complex world
Hallucinations. Bias. Misinformation. The relationship between humans and today’s powerful AI systems can often feel adversarial.
But what might it look like if humans and large language models worked together? What if these two agents, natural and artificial, helped each other design efficient, interdisciplinary solutions that benefit people, tackle real problems, and promote responsible AI use?
Shurui Zhou, an assistant professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society (SRI), recently brought a group of students and faculty together to explore these and related questions.
The Responsible LLM-Human Collaboration: Hackathon and Symposium, co-sponsored by SRI and the Data Sciences Institute (DSI), took place over two days in October. The event is part of “Toward a Fair and Inclusive Future of Work with ChatGPT,” an Emergent Data Science Program (EDSP) on which Zhou is principal investigator and SRI Associate Director Lisa Austin, Faculty Fellow Shion Guha, and Faculty Affiliates Syed Ishtiaque Ahmed and Anastasia Kuzminykh are co-leads.
The hackathon and symposium tackled questions such as:
What are some ways we can integrate the latest advancements in LLMs into the programming landscape?
What kinds of practical applications can we think of for LLMs, particularly incorporating ethical considerations and with social benefit as a goal?
Can humans use LLMs as partners to conceptualize, refine, and prototype novel tools with an AI component that might advance equity and human well-being?
On the first day of the two-day event, over 30 students from the University of Toronto gathered at the Schwartz Reisman Innovation Campus’s 10th floor winter garden for a hackathon. Students were asked to come up with an idea for a new system or app that effectively addresses a challenge or improves upon a current solution.
One team of students chose to work on a resume-screening tool that could tackle ethnicity bias in job hiring practices. Another team opted to make home cooking easier and more efficient by developing a tool that would tailor recipe recommendations to a user’s available resources and dietary preferences. A third team sought to democratize data access by helping people who are unfamiliar with structured query language (SQL)—a programming language for storing and processing information in databases—call up the kinds of data that they might need.
“Databases are usually quite large and not easily operable and accessible by those who are not familiar with SQL,” says Leo Li, a member of the team aiming to democratize data access, which also included students Yue Li and Daixin Tian.
“Even someone who knows SQL well might struggle with how to write an SQL query in order to get what they want,” says Li. “So, our aim was to create an easily accessible and interactive database chatbot that could help users access data through a process that is basically just like online chatting.”
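The article does not include the team’s code, but the general approach Li describes, using an LLM to translate a plain-language chat message into a database query, can be sketched in a few lines. The snippet below is a minimal illustration only: it assumes the OpenAI Python client, a hypothetical SQLite database and schema, and an assumed model name, none of which are confirmed as the team’s actual stack.

```python
# Minimal sketch of a natural-language-to-SQL chatbot.
# The schema, model name, and prompt are illustrative; this is not the team's code.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer TEXT,
    total REAL,
    ordered_at TEXT
);
"""

def question_to_sql(question: str) -> str:
    """Ask the model to translate a plain-English question into a single SQL query."""
    prompt = (
        "Translate the user's question into a single SQLite SELECT statement "
        f"for this schema:\n{SCHEMA}\nReturn only the SQL, with no explanation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": question},
        ],
    )
    # Naive cleanup in case the model wraps its answer in backticks.
    return response.choices[0].message.content.strip().strip("`")

def answer(question: str, db_path: str = "shop.db"):
    """Generate a query from the question, run it read-only, and return the rows."""
    sql = question_to_sql(question)
    with sqlite3.connect(f"file:{db_path}?mode=ro", uri=True) as conn:
        return sql, conn.execute(sql).fetchall()

if __name__ == "__main__":
    sql, rows = answer("Which three customers spent the most last month?")
    print(sql)
    print(rows)
```

Opening the database in read-only mode is one simple safeguard for this kind of chatbot, since a generated query is executed without a human reviewing it first.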
Ashley Christendat, a student combining computer science and cognitive science to explore AI, data science, and human-computer interaction, worked on the team developing a tool to tackle bias in hiring decisions.
“By leveraging OpenAI's API, we assessed the ethnicity of a job candidate’s name and how well they fit the job description submitted,” says Christendat about her project with teammates Yan Qing Lee and Fiona Hoang. “We then used linear regression to uncover correlations between a candidate's perceived ethnicity and their scores, all while controlling for the applicant's match to the role.”
“This project not only sheds light on an important issue in recruitment,” Christendat says, “but also empowers organizations to make more informed and equitable hiring decisions.”
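The second stage Christendat describes, a linear regression of candidate scores on perceived ethnicity while controlling for job fit, can be illustrated with a short sketch. The data, column names, and statsmodels specification below are hypothetical placeholders, not the team’s actual analysis; they assume the LLM-assigned scores have already been collected into a DataFrame.

```python
# Illustrative sketch of the described analysis: regress an LLM-assigned candidate
# score on perceived ethnicity while controlling for job fit.
# The example data and column names are made up for demonstration only.
import pandas as pd
import statsmodels.formula.api as smf

# Assume each row is one application: an LLM-assigned overall score, the ethnicity
# inferred from the candidate's name, and a job-fit score used as the control.
df = pd.DataFrame({
    "score":     [78, 85, 62, 90, 71, 66, 88, 74],
    "ethnicity": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "job_fit":   [0.80, 0.90, 0.60, 0.95, 0.70, 0.65, 0.90, 0.75],
})

# Ordinary least squares: score ~ perceived ethnicity + job fit.
# A significant ethnicity coefficient after controlling for job_fit would point
# to bias in the scoring step rather than to differences in qualifications.
model = smf.ols("score ~ C(ethnicity) + job_fit", data=df).fit()
print(model.summary())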
“The hackathon was an inspiring occasion for us to think positively about powerful AI systems,” says SRI Executive Director Monique Crichlow. “In an era of rapidly proliferating—and often justifiable—concerns about AI safety and the risks posed by advanced AI systems, it was heartening to see a group of students who will be tomorrow’s technology leaders working on novel applications for social benefit using the unique capabilities of LLMs.”
“These students demonstrated not only critical thinking and technical innovation,” says Crichlow, “but also an admirable commitment to solving practical problems for humanity. They incorporated diverse perspectives, thought about real-world impact, and always kept ethical responsibility in mind while developing their innovative tools. SRI was very proud to co-host this event and the EDSP it’s part of with DSI, and we thank Professor Zhou for her hard work in convening and executing it.”
On the second day of the two-day event, leading researchers, including program co-leads Kuzminykh, Ahmed, Guha, and Ratto, presented their work on the latest advancements in the development, applications, and ethical considerations of LLMs. Topics included:
How might we go about designing AI to engage in complex relational interactions, such as a chatbot that helps people quit smoking?
How can AI help us craft data-driven storytelling for audience engagement and interaction? Can experiencing this kind of storytelling help people think better and extend the boundaries of their capabilities and imagination?
What does software engineering look like in the era of LLMs? When engineers use LLMs to build tools, how can we ensure both quality and efficiency while developing and maintaining trust in the models being used?
The Varsity published coverage of the October 5 symposium, including summaries of talks by Yasir Zaki on racial and gender biases in an image generator called SDXL and Zhijing Jin on improving an LLM’s reliability and cooperation through causal reasoning.
“Our goal is to foster a community dedicated to establishing robust guidelines, policies, and safeguards for the ethical use of LLMs,” says Zhou. “I hope our speakers and attendees gained valuable insights into the future trajectory of LLMs and their societal impact.”
The Responsible LLM-Human Collaboration hackathon and symposium is part of a larger collaboration between SRI and DSI titled “Toward a Fair and Inclusive Future of Work with ChatGPT.”