David Duvenaud reflects on post-AGI workshop
Questions about humanity’s future in a world shaped by artificial general intelligence (AGI) are moving quickly from science fiction to pressing, everyday concerns. As progress toward superintelligence accelerates, the question is no longer whether such a future will arrive, but how societies should prepare for it.
When it comes to artificial intelligence, SRI Chair David Duvenaud argues that now is the time to ask difficult questions and to ground speculation in evidence. With AI poised to transform how we live, work and relate to one another, Duvenaud and his colleagues convened the Post-AGI Civilizational Equilibria workshop on July 14, 2025, with support from the Schwartz Reisman Institute.
“One of my hopes with this workshop was that it would make more people realize that there is still basically no plan or positive vision out there,” says Duvenaud, “but I'm worried it'll have the opposite effect, and make it seem like experts are calmly handling the situation.”
The event brought together a highly curated mix of researchers and policy experts from academia, industry, government, philanthropy and research organizations to critically examine possible futures beyond the development of AGI and the choices that lie ahead as we approach an unprecedented technological horizon.
This composition ensured the discussions reflected perspectives from across the technical, policy, and governance landscape. The following summary provides an overview of the talks, highlights from the discussions, and reflections on the state of the field.
The talks
Joe Carlsmith – “Can Goodness Compete?”
Carlsmith asked whether cooperative strategies can survive against highly competitive agents that maximize replication and resource consumption. This raised fundamental questions about how societies might structure themselves in the face of relentless competition.
Watch: 29 minutes
Richard Ngo – “Flourishing in a Highly Unequal World”
Ngo suggested that future beings may differ enormously in their levels of intelligence and power. He argued for the importance of “healthy asymmetric” relationships—such as those between parents and children—as a possible model for coexistence in unequal societies.
Watch: 23 minutes
Morgan MacInnes (University of Toronto, Political Science) – “The History of Technologically Provoked Welfare Erosion”
Presenting joint work with Allan Dafoe, MacInnes argued that technological competition can sometimes pressure states to reduce protections for their own citizens, drawing parallels with historical episodes.
Watch: 13 minutes
Liam Patell (GovAI) – “Evolutionary Game Theory and the Structure of States”
In a direct response to MacInnes, Patell used evolutionary game theory to argue that, under certain conditions (e.g., when only two states are in competition), equilibria can emerge that preserve citizen welfare. Hosting both talks in sequence demonstrated the value of open debate and intellectual exchange in this field.
Watch: 6 minutes
Jacob Steinhardt (CEO, Transluce) – “Post-AGI Game Theory”
Steinhardt explored how future AIs may shape their own development. He proposed proactively creating large datasets of AI behaviors that reflect pro-social values, so that future language models learn these defaults. He likened the effort to giving children moral fables as a way of shaping values.
Watch: 6 minutes
Anna Yelizarova (Windfall Trust) – “Scenario Planning for Transformative AI’s Economic Impact”
Yelizarova examined possible patterns of wealth concentration in an AI-transformed economy, drawing on empirical evidence to forecast where such concentrations may arise.
Watch: 5 minutes
Fazl Barez (University of Oxford) – “Resisting AI-Enabled Authoritarianism”
Barez explored which AI capabilities are likely to empower states, and which could empower citizens, in order to anticipate how AI might shift the balance of political power.
Watch: 7 minutes
Ryan Lowe (Meaning Alignment Institute) – “Co-Aligning AI and Institutions”
Lowe argued that AI alignment strategies must extend beyond technical design to include the institutions in which AI is developed and deployed. He also highlighted the challenges of defining human values in practice, noting the limitations of system prompts and preference orderings as ways of specifying them.
Watch: 6 minutes
Stephen Casper (MIT; UK AI Safety Institute) – “Taking the Proliferation of Highly-Capable AI Seriously”
Casper addressed the risks of widely available, highly capable open-weight AI models, and outlined practices that could mitigate dangers even under conditions of broad dissemination.
Watch: 7 minutes
Tianyi Alex Qiu (Peking University) – “LLM-Mediated Cultural Feedback Loops”
Qiu presented empirical work on “culture lock-in,” in which AI-generated outputs shape subsequent human-created content, forming reinforcing feedback loops that can entrench particular values or practices.
Watch: 7 minutes
Beatrice Erkers (Existential Hope) – “Scenarios for Near-Term Futures”
Erkers described two scenarios: (1) a “tool-AI” future built on coordination to limit agentic AGI, and (2) a “d/acc” future shaped by decentralized technological development.
Watch: 5 minutes
Avid Ovadya (AI & Democracy Foundation) – “Democratic Capabilities for Good AGI Equilibria”
Ovadya considered how democratic institutions could be adapted to manage pressures from AI development, including the introduction of AI delegates and new mechanisms for large-scale coordination.
Watch: 5 minutes
Kirthana Singh Khurana (University of British Columbia, Law) – “Corporations as Alignment Mechanism Laboratories”
Khurana argued that corporations face alignment challenges analogous to those of AI systems: both must be constrained to act in ways consistent with the public good. Audience members suggested that studying corporate misalignment—and the mechanisms developed to correct it—could inform AI alignment research.
Watch: 8 minutes
Reflections
The workshop underscored the value of convening diverse perspectives to address questions that no single field can answer alone. By bringing researchers, policymakers and industry leaders into the same conversation, SRI researchers encouraged the kind of dialogue that is essential to confronting the possibilities of a post-AGI future.
“Several themes emerged from the day,” says Duvenaud. “First: historical parallels and analogies are a useful tool for foresight, putting abstract discussion about AGI in concrete contexts. The world will change massively, but it's not a total mystery what the main forces affecting the future are likely to be.”
“I wish we had prompted the speakers to articulate more precise hypotheses about the future, even implausible ones,” Duvenaud continues. “I think that brainstorming is useful at this stage and speculation by experts is undersupplied, maybe because it looks relatively amateurish. Plus, I think this exercise would have made it clearer to outsiders just how undeveloped thinking in this area is in general.”
However, for Duvenaud, big questions persist. “Even despite this breadth of discussion, no one has yet proposed, in my mind, any especially plausible trajectory in which human interests are respected post-AGI,” he says. “That’s why we have to gather people from different disciplines to find all the relevant expertise, figure out the important points of disagreement, open questions, and begin building a new field of research.”
Looking ahead
The next iteration of this workshop—Post-AGI Culture, Economics, and Governance—will take place on December 3, 2025, co-located with NeurIPS in San Diego. The program will feature an expanded lineup of speakers and will continue to foster critical discussion about the cultural, economic, and institutional dynamics of a post-AGI future.
The December speaker lineup includes:
Max Tegmark, MIT & Future of Life Institute
Anton Korinek, University of Virginia (tentative)
Iason Gabriel, DeepMind
Alex Tamkin, Anthropic Societal Impacts Team
Anders Sandberg, Institute for Futures Studies
Ivan Vendrov, Midjourney
Ajeya Cotra, Open Philanthropy (tentative)
Michiel Bakker, DeepMind & MIT
Beren Millidge, Zyphra
Atoosa Kasirzadeh, Carnegie Mellon University
Deger Turan, Metaculus
Wil Cunningham, DeepMind & University of Toronto