Democracy rewired: SRI essay series explores safeguarding democratic values in the age of AI
In a new essay series, the policy team at the Schwartz Reisman Institute for Technology and Society examines AI’s impact on the values underpinning democratic societies and governance. The series explores how AI, if left unchecked, may affect democracy, offering an opportunity both to reaffirm democratic values and to critically assess the role of AI governance and regulation.
Democracy and technology have long been intertwined, with innovations shaping democratic values in subtle, incremental ways.
The technological breakthroughs of the Industrial Revolution, for example, influenced nearly every aspect of daily life. They expanded access to information, economic participation and political engagement, key drivers in the emergence of modern democracy. Yet the innovations of this transformative period also bolstered dictatorial regimes, enabled mass propaganda and facilitated economic monopolies that shaped political influence. Technology, in other words, can erode democracy just as easily as it can foster it.
What’s clear is that innovation alone is not inherently democratizing; its impact depends on how society chooses to shape and govern its development.
Today, a similar transformation is unfolding as artificial intelligence (AI) reshapes the foundations of democratic governance. In the long history of technological change, AI is emerging as a disruptive force with the potential to either reinforce or undermine democratic values on multiple fronts.
As democracy moves further into an AI-enabled world, fundamental questions emerge: What values lie at the heart of democracy, and how are they being shaped by AI? Just as importantly, how can democracies ensure AI strengthens rather than undermines those core values?
Democracy Rewired, a new essay series from the policy team at the Schwartz Reisman Institute for Technology and Society, highlights this profound and transformative relationship between AI and the values that underpin democratic societies and governance. As part of its work on the governance and regulation of AI and other emerging technologies, the policy team develops concrete, actionable ideas to help ensure these powerful technologies contribute to a better world for everyone. Building on this effort, the series explores how AI both reshapes and challenges the core values of democracy.
A closer look at the series
Can democracy keep pace with AI? AI is changing the world faster than the laws and institutions that shape society can adapt. This essay series explores what that means for democracy – not just in government, but in relation to individual freedoms, social cohesion, sovereignty and the modern social contract. Taken together, the essays show that these challenges are not isolated but reflect interwoven tensions between technological progress and democratic resilience. They highlight the need to critically engage with AI governance, regulation and the evolving notion of rights in the age of AI.
The state, individuals, and communities
At the heart of democratic governance lies the administrative state, tasked with shaping innovation while upholding democratic integrity.
Among the many challenges surrounding AI governance is the question of how governments can regulate AI while staying true to democratic principles. AI is advancing by leaps and bounds, driven largely by the private sector, where most expertise and control over the technology are concentrated. This creates an imbalance: on the one hand, citizens may find their needs and values insufficiently represented or protected in a landscape shaped largely by private actors; on the other hand, governments face a knowledge gap and the slow pace of democratic institutions, which struggle to keep up with the speed and complexity of innovation.
Addressing this challenge requires more than traditional policymaking — it calls for innovation within the administrative state to ensure governance remains effective in an era when expertise and influence increasingly lie outside it. Yet the challenge extends beyond government: AI also threatens democracy at the individual level, putting core freedoms at risk.
The surveillance and predictive capabilities of AI technologies are eroding once-implicit protections for privacy and autonomy. Existing safeguards — whether in law enforcement oversight or political messaging rules — are being outpaced. As AI reshapes the relationship between individuals and the state, democratic societies must rethink digital rights to ensure these fundamental freedoms remain intact. Protecting people from excessive surveillance and AI-driven manipulation is not just about preserving privacy — it’s about safeguarding independent thought and the ability to participate in democracy without undue interference.
At the same time, AI is transforming how people connect with one another. If democracy relies on the individual’s ability to reason and deliberate, it equally depends on collective engagement.
AI’s impact on civic participation is a paradox: it connects communities yet deepens divides, spreads knowledge while amplifying misinformation. Existing debates on AI and democracy tend to focus on how AI disrupts traditional engagement, but rarely explore how it might reimagine democratic participation altogether. The challenge is not only to mitigate risks, but also to seize the opportunity to create new mechanisms for deliberation — ones that reinforce, rather than replace, the human dimensions of democracy.
Evolving global and human values
Many of today’s most advanced AI systems depend on global data flows, cross-border digital infrastructure and decentralized networks — technologies that naturally defy national boundaries and traditional ideas of territorial control, long seen as central to sovereignty. At the same time, control over these systems remains concentrated in a handful of major tech firms, primarily based in a few regions, allowing these non-state actors to shape public discourse across borders.
In democratic societies, sovereignty is more than a legal concept; it underpins political legitimacy by allowing citizens to influence collective decisions through elected representatives. Yet the unchecked spread of AI risks eroding state authority and weakening the foundations of that legitimacy. As AI systems cross borders and empower private actors, they challenge the traditional role of the state and raise urgent questions about how democracy can endure when influence is no longer bound by geography. Preserving state autonomy in the age of AI will require international cooperation that safeguards democratic values while recognizing the borderless nature of the digital world.
Yet even as AI disrupts democratic governance, individual freedoms, collective engagement and sovereignty, a deeper question emerges: Is democracy itself due for an update? If AI is rewriting the rules of social and political life, then perhaps the best way to ensure a resilient democracy is for it to evolve alongside these changes.
Preserving democratic values in this context isn’t about resisting change, but about renegotiating the modern social contract to ensure AI serves human interests rather than undermining them. This means updating digital rights, building AI systems that reflect democratic values and creating new rules for how humans and AI interact.
As the series highlights, the realities of AI demand more than reinforcing existing frameworks and patchwork regulation. They require new systems and principles that could redefine what is considered fundamental to democracy, along with deeper reflection on how to rewire the relationship between democracy and technology to safeguard democratic values as they evolve in the age of AI.
About the author
Jamie A. Sandhu is a policy researcher at the Schwartz Reisman Institute for Technology and Society at the University of Toronto. Drawing on several years of experience with the United Nations, various European organizations, and the Government of Canada, he specializes in geopolitics, international security, technology governance, and the use of technology to improve governance processes. Jamie is driven to shape policy and regulation that balances industry needs, institutional integrity, socio-economic mechanisms, and societal well-being. He has a track record of guiding decision-makers through the cross-sector socio-economic challenges raised by technological advances and of bridging knowledge gaps among stakeholders to reach shared goals and a common understanding. He holds a BA in international relations from the University of British Columbia and an MSc in politics and technology from the Technical University of Munich. His current research interests centre on international cooperation on AI and on advancing AI safety through a socio-technical approach to AI governance.