Social agentics: Rethinking AI’s role in human worlds
Researchers and scholars spent the day at the Schwartz Reisman Innovation Campus tackling some big-picture questions about the future of AI and society. Photo credit: Dan Browne.
As artificial intelligence systems become increasingly embedded in everyday life, the challenge is no longer simply to understand how these systems function, but to understand how they shape, and are shaped by, the social contexts in which they operate. That challenge demands not just technical knowledge, but new ways of thinking, speaking, and collaborating—tools that can bridge expert insight with broader social understanding.
On July 23, 2025, a group of interdisciplinary scholars gathered at the Schwartz Reisman Innovation Campus as part of the 2025 ACM Compass Conference to explore precisely that kind of translational work. Convened as the inaugural Social Agentics Workshop, the day-long event asked how we might design AI agents that operate not only effectively, but equitably, within the complexity of human social worlds.
Led by SRI Faculty Affiliate Matt Ratto (Faculty of Information, University of Toronto), in collaboration with Anastasia Kuzminykh, Shion Guha, and researchers from the University of Waterloo and the University of Edinburgh, and sponsored by the Schwartz Reisman Institute for Technology and Society and the Faculty of Information, the workshop mixed presentations, collaborative mapping exercises, and speculative design prompts. Its aim was not to define the term social agentics—a term coined by Ratto to describe the unique impacts of conversational AI agents across social and cultural contexts—but to create an open intellectual space in which it could be explored.
“The term isn’t meant to describe a thing,” Ratto explained in a follow-up conversation. “It’s meant to generate interest—to create a space of opportunity, a set of research questions, a set of future projects that can help us better situate AI agents in society.”
That sense of open-ended inquiry ran through the entire day. In the morning, participants gathered in the SRIC’s 7th-floor space and began by identifying key themes that should guide the development of agentic AI systems. Working collaboratively on a shared Miro board, attendees contributed concepts ranging from “epistemic legitimacy” to “repairability” to “AI as a co-worker.”
For Ratto, these themes underscore the limitations of current AI design frameworks, many of which treat social context as an afterthought. “I see a lot of agentic AI being built in a decontextualized way,” he said. “Developed almost outside of human society. But if we can acknowledge AI’s embeddedness—its social positionality—we can design systems that are not just functional, but equitable.”
Later exercises built on these foundational themes. In one activity, attendees were asked to mark which concepts they believed required more attention and debate. The results revealed shared concerns about accountability, user agency, and the risk of erasing vulnerable or invisible stakeholders in AI development. “What struck me was the degree of consensus around the need for care—care in design, in use, and in thinking about unintended consequences,” Ratto reflected. “That’s not always a given in technical discussions.”
This framing—technological development as inherently social and political—was central to the workshop’s goals. Rather than reinforcing a binary between innovation and critique, the workshop invited participants to blend the two, exploring the creative and intellectual tensions that emerge when both imperatives are held together.
As one exercise demonstrated, speculative imagination can play a crucial role in this kind of inquiry. Participants were asked to complete the sentence “In a world where AI agents have complex social knowledge…” and generate possible futures. The results ranged from utopian to dystopian. One group imagined a world where agentic systems were legally required to explain themselves; another envisioned AI co-managing universities alongside human administrators.
“That ‘in a world where…’ moment was one of the most productive parts of the day,” Ratto said. “It helped us think expansively. Not just about the challenges AI presents, but about how we might radically reshape it. There’s a kind of wildness to this moment, and we wanted the workshop to embrace that.”
That wildness stems in part from the rapidly shifting technical foundations of AI itself. With the rise of large language models and generative agents, traditional models of system design—where humans explicitly structure knowledge through logic and rules—no longer suffice. What emerges instead is a new kind of design gap, one that technologists are struggling to fill.
“In the past, the excuse was, ‘We can’t model social complexity—we only have procedural logic,’” Ratto explained. “But today’s systems are probabilistic, indeterminate. That opens up space to think differently, to bring in qualitative, ethnographic, and critical methods.”
Ratto emphasized that this is not a moment to retreat from AI, but to engage more deeply with its possibilities. “Too often, there’s a reactionary impulse—either fully embrace the hype or reject AI altogether,” he said. “Social agentics is trying to carve out a third space. One that says: yes, these systems are powerful, but they’re not finished. And that means we have a responsibility to shape them.”
That shaping, he argued, must happen at all levels: from interface design to institutional governance. “Often, those goals are treated as separate: build the tech, then govern it. But I think we can do both at once,” he said. “That’s what social agentics is about—creating AI systems aligned with human life goals, including equity and inclusion.”
Universities, Ratto believes, are uniquely positioned to model that alignment. With the capacity to bring together deep technical expertise and critical social reflection, academic institutions can serve as sites of co-creation—places where development and governance are not siloed but integrated.
“We need environments where designers, ethicists, users, and critics can actually work together—not just in parallel, but in dialogue,” he said. “That’s the institutional experiment we’re trying to run here.”
Future iterations of the workshop are already in the works. This fall, the team will host sessions at the 2025 Computer Supported Cooperative Work (CSCW) conference in Bergen, Norway, and at CASCON in Toronto. Those events will adopt a more thematically focused structure, drawing on the priorities identified by SRI.
For Ratto, the July 23 gathering offered a welcome opportunity to slow down and think collectively, outside the usual constraints of deliverables and deadlines. “We wanted to make a space where we didn’t have to pretend we had all the answers,” he said. “Where it was okay to bring questions, tensions, uncertainties. That’s what makes this work interesting—and urgent.”
About the Schwartz Reisman Institute
The Schwartz Reisman Institute for Technology and Society supports and integrates world-class research across sectors and disciplines to deepen our understanding of advanced technologies, law, institutions, regulatory structures, and social values. We foster interdisciplinary ideas, insights, and understandings of how technology affects society and individuals, with a goal of rethinking technology’s role in society, the contemporary needs of human communities, and the systems that govern them.
Our mission is to deepen our knowledge of advanced technologies, societies, and what it means to be human by actively integrating research across traditional boundaries and fostering human-centered solutions that really make a difference—for everyone.