DEMOCRACY REWIRED:

Safeguarding democratic values in the age of AI

An essay series by the Schwartz Reisman Institute for Technology and Society.

This series is authored by the policy team at the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. The policy team focuses on the governance and regulation of AI and other emerging technologies, with a strong emphasis on developing concrete, actionable ideas that are mindful of the practicalities of implementation.

PREFACE

Democracies worldwide are grappling with challenges that have arisen over recent decades, such as rising populism, globalization, the impact of social media, and the automation of work. Amidst this turbulent time of transformation, artificial intelligence (AI) emerges not simply as a tool, but as a transformative technology that could either support or undermine democracy. This essay series, Democracy rewired, seeks to unpack the profound relationship between AI and the values that underpin democratic societies and governance. The series examines AI’s complex and nuanced impact on individual rights, social cohesion, state sovereignty, and the social contract. In doing so, the intention is to understand how AI might shape democracies in the years to come: to renew the commitment to core democratic values, but also to engage critically with AI governance and regulation and with the evolution of rights in the digital age. This means not merely reinforcing existing frameworks but actively contending with the development of new systems and values that may redefine the aspects of democracy that are considered fundamental.

As democracy proceeds further into an AI-enabled world, the essential question is: how can democracies ensure AI strengthens their values rather than undermines them? As this series reveals, a genuine shift in values will soon necessitate a confrontation with AI’s impact on democratic principles. It will challenge society to align the impact of AI with fundamental democratic ideals. The series presents crucial considerations for AI governance that, while making room for innovation, prioritize the preservation of democratic trust and integrity, urging thoughtful action to shape AI in service of democratic values.

EXPLORE THE ESSAYS

  • Written by David Baldridge

    This series begins by considering the urgent question of democratic governance: can it stand firm in the face of AI’s vast potential? Today, the promises and perils of technology are pressing upon systems of law, policy, and administration, challenging democratic governments to rise to the task of regulating AI in ways that align with democratic ideals.

    Introduction

    Democracies around the world are facing fundamental challenges that have emerged since the mid- to late 2000s. Issues like rising populism, addictive and data-hungry social media platforms, and the automation of work all present pressing policy problems. In this moment of change and vulnerability for democratic societies, AI is poised to further disrupt democratic government and regulation through a myriad of political, social, and economic impacts, while also providing opportunities to bolster the core values of democratic society. Given these high stakes, ensuring the effective governance and regulation of AI is a necessary and worthy pursuit.

    This piece examines the relationship between AI and the administrative state and tackles a fundamental challenge: democratic governments are struggling to effectively regulate AI.

    The idea of democracy often evokes notions of elections, prominent politicians, and kitchen-table issues such as taxes, healthcare, and the state of infrastructure. However, democracy goes beyond these everyday considerations. It reflects the values that enable sharing power across a population by setting rules and policies through open debate, effective delegation, and fair processes, as well as by protecting human rights and fundamental freedoms. The manifestations of these values can escape the notice of many citizens who view democracy as nothing more than showing up at a voting centre once every few years. The essential components of democracy go far beyond the simple act of voting in free and fair elections. They include the protection and exercise of individual freedoms of speech and assembly, community organizers’ promotion of political awareness and action, and the institutional structures that keep politicians in check—from an impartial judiciary and media to organizations such as the United Nations. Given the diversity of processes, rights, institutions, and actions that constitute the values of a democratic society, how significantly will AI disrupt democratic processes and thus undermine democratic values?

    AI and the administrative state

    AI is often described as a “general-purpose technology”, one that has the potential to be used in and thus transform almost every aspect of human life, including social interaction, labour markets, commercial activity, and the delivery of services like healthcare and education.1 Democratic spaces, mechanisms, and institutions are equally susceptible to disruption from AI. Democratic values, which hold these pillars of democracy in place, have already been undermined by AI systems in a variety of ways. For example, AI systems have eroded individual rights,2 polarized political discourse,3 and shaped the global digital economy in a way that has been characterized as “colonial.”4 On the other hand, AI could also support democratic values by enhancing public access to information. AI systems have been used in education initiatives and healthcare awareness programs,5 demonstrating the potential of these technologies to improve public participation by fostering an informed citizenry. Thus, while the development of AI can be aligned with democratic values to preserve and enhance democratic societies and governance for future generations, the failure to do so may invite severe consequences. However, early attempts to reap the benefits of these technologies while avoiding their negative consequences have faced serious challenges, including the concentration of innovation and knowledge in the private sector and the speed of AI development compared to the necessarily slow pace of democracy.

    AI and the concentration of power

    AI innovation is largely driven by private companies. While universities and public research help to advance these efforts, the development and deployment of the most impactful AI systems are concentrated in the private sector.6 Moreover, the developers of some of the most powerful and impactful systems today are backed by and partnered with large technology companies that have the resources to develop and deploy AI systems rapidly at a global scale. For example, the generative AI systems ChatGPT and Claude, from the startups OpenAI and Anthropic, have benefited from the resources those companies obtained through partnerships with Microsoft and Amazon. The tremendous strides made possible by these partnerships, such as OpenAI’s 2024 release of its video-generation system Sora, have surprised and concerned the world.7

    Furthermore, AI expertise is concentrated in the private sector, and especially in a limited number of large, powerful firms.8 This situation seriously impairs the capability of democratic governments to regulate AI, as both governments and the general public lack the expertise necessary to grapple with the impacts of this new technology. Even if governments can impose effective regulation, AI is rapidly evolving, and its dynamic nature means the rules that governments set quickly become outdated. For example, policymakers in Canada unveiled an attempt to regulate AI in June 2022, which quickly became largely obsolete with the emergence of sophisticated generative AI. The legislation died on the order paper following the prorogation of Parliament in January 2025 and is unlikely to be reintroduced, especially as Canada’s new Minister of AI and Digital Innovation recently signalled a shift in focus away from regulation and toward harnessing economic benefits. What this approach will look like in practice remains unclear. Put simply, emerging technologies move quickly; regulations do not.9

    The challenge of AI regulation by democratic governments

    The challenge of AI regulation by democratic governments goes beyond the dynamic nature of AI or the consolidation of knowledge in the private sector. It also lies in ensuring a balance between effective governance and the principles of consultation, informed public participation, and respect for individual and corporate autonomy. Democratic systems traditionally value input from citizens, strive for transparency and accountability, and seek to avoid undue intrusions. Navigating these principles within the rapidly evolving landscape of AI poses a unique challenge for regulatory frameworks. By design, democracy is slow even at the best of times. The processes described above are time-consuming: policymakers and citizens become aware of a problem, inform themselves about it, deliberate over possible solutions, and then assess those solutions for their effectiveness. Society is witnessing the difference in speed between the development of regulation in democracies and AI advancement in the private sector play out in real time. Democratic processes are taking years to develop AI regulation, as has been observed in Canada10 and the European Union.11 Meanwhile, the knowledge gap12 between the government and the private sector continues to grow, giving rise to a new worry for policymakers: the economic13 and national security implications of falling behind the AI innovation curve.14 All these challenges expose one core truth: the ability to apply democratic values to AI through new regulation is crucial. Failure to do so could undermine the entire democratic system as the private sector and authoritarian states consolidate expertise, knowledge, and control.

    If democratic governments cannot effectively regulate AI, the site of AI regulation will move elsewhere. Specifically, governance initiatives will be led by industry itself, as it has the expertise, flexibility, innovative mindset, and understanding of AI’s future development necessary to create and adapt effective rules, standards, and certification systems. Although these are important elements of AI governance, self-governance alone is an inadequate approach to regulation.15 Private companies are always incentivized to prioritize profits and market share over the public interest. Thus, giving private companies responsibility to regulate themselves may lead to regulatory systems that do not fully serve the public interest and which also undermine public trust in AI regulation. This is what happened when a lack of effective regulation caused the 2008 financial crisis, severely undermining public confidence in the financial sector and requiring an overhaul of United States financial regulation.16

    Authoritarian governments may also be afforded an advantage by inefficient democratic decision-making on AI. Authoritarian governments, by swiftly infusing AI with their political values and shaping governance to align with their interests, might wield more influence over both the development of AI and its governance than proponents of democracy would find acceptable. This phenomenon is already starting to materialize in nations frequently characterized as authoritarian. China, for example, moved quickly to implement regulation in various sectors of AI development, including generative AI, which some have suggested is influenced by the government’s desire to maintain control over political discourse and content generation.17

    A way forward

    Democratic governments do not have the capacity to effectively regulate AI due to the technology’s dynamic nature, the concentration of expertise in the private sector, and the necessarily slow pace of democratic institutions. This is a problem worth solving, and one that will require innovative forms of regulation. For example, some researchers at the Schwartz Reisman Institute for Technology and Society have proposed an AI registry18 or a market-based third-party auditing regime19 as potential policy solutions to effectively regulate AI. To meet these challenging governance tasks, democracy must be robust. Ensuring that it remains a viable form of government in the age of AI starts with a close examination of AI’s other impacts on democratic values, as subsequent pieces in this series aim to do.

  • Written by David Baldridge and Maggie Arai

    Democracy is not just a system; it is the beating heart of individual freedom. In essay two, attention turns to the inherent rights of privacy and autonomy, which are eroding under AI’s capacity to observe, shape, and influence the foundational values that give democracy strength. Protecting these values becomes crucial not only to safeguard individual rights but also to reinforce the democratic principles that rely on them.

    Introduction

    Individual freedom is a core democratic value. It is what differentiates genuinely democratic societies from authoritarian regimes that legitimize themselves through performative elections. Civil liberties and democratic values are mutually reinforcing. Democracy ensures that people can replace politicians who are restricting their liberties. Similarly, civil liberties protect the democratic process by empowering individuals to exercise their rights, form and express diverse opinions, and actively participate in shaping the direction of their government without fear of repression or arbitrary control. While civil liberties like freedom of speech and voting rights are essential in democratic societies, they are not democratic values themselves. Instead, they stem from core democratic values like equality and dignity. Among these principles, privacy and autonomy are particularly relevant in the context of AI, given the profound ways AI can shape personal freedom and individual decision-making. 

    AI’s impact on privacy 

    The insights AI applications can generate from personal information are raising concerns about privacy in a variety of sectors. New applications of AI can transform the privacy implications of existing infrastructure and data collection. Consider, for example, the new implications of being filmed by security cameras in the world of deepfakes and facial recognition technology, or of accessing video therapy now that therapists may use an AI notetaker without your knowledge or consent. This brief focuses on the privacy concerns raised by surveillance, since it is an aspect of privacy that closely and clearly impacts democratic values.

    Surveillance refers to “any activity that monitors behaviours, actions, or communications.”20 It can range from broad efforts, like surveillance cameras covering high-traffic public areas such as shopping malls and airports, to targeted surveillance of individuals through phone wiretaps, location tracking, and human observation. The impact of AI on surveillance is already becoming clear. The use of AI in the criminal justice system is subject to particular scrutiny because enhanced surveillance powers for the state have serious implications for its citizens, namely, criminal punishment. For example, jurisdictions around the world are grappling with how to regulate police use of facial recognition technology or even whether to permit its use at all.21 While police forces may seek more limited applications of such technology, it is not difficult to imagine Orwellian scenarios where facial recognition is used in ways that sacrifice individual rights under the guise of “preventing crime.” Indeed, private companies are already using this technology to identify and police adverse parties.22

    One of the most frightening aspects of this situation is that it does not require significant changes to the surface-level relationship between citizens and the state. Rather, it challenges the existing equilibrium between personal privacy and the state’s legitimate need to engage in surveillance and maintain order. There is an accepted degree of state surveillance in daily life in the form of video camera surveillance, direct observation by police officers, fingerprinting, and DNA analysis. There is likewise a level of comfort with trading away personal privacy: sharing personal images on social media, using facial recognition to unlock phones, and allowing location tracking for ease of navigation. AI significantly increases the accuracy and intrusiveness of the insights that can be determined based on this data. Essentially, the existing mechanisms of state surveillance now have better and more sensitive information at their disposal.

    The insights AI can glean from existing data and infrastructure facilitate intrusive surveillance that undermines the core of a free and democratic society. Chief Justice John Roberts of the United States Supreme Court has warned that new technologies can facilitate surveillance that would “alter the relationship between citizen and government in a way that is inimical to democratic society.”23 This is because surveillance concerns more than just basic violations of privacy. Widespread surveillance can create a chilling effect and dissuade people from exercising other fundamental freedoms, such as freedom of speech and assembly, which are core to any functioning democracy. AI has the capacity to significantly increase the effectiveness of state surveillance, thereby magnifying this chilling effect. For example, facial recognition technology can process huge amounts of video surveillance footage in real time and provide accurate identification of individuals who attended a protest. Predictive policing tools could scan internet traffic and allow police to pre-empt protests. Exercising rights to free speech and assembly, even on issues of fundamental importance, comes under threat if there is a virtual certainty of being identified and arrested after the fact or if a protest is prevented from the outset.

    AI’s impact on autonomy

    Autonomy is a vital democratic value, as it underlies the freedom for individuals to think, make decisions, and act without undue influence. For example, autonomous thinking is a necessary precursor to free and informed decision-making, which is the cornerstone of the democratic process. Autonomy also includes the freedom to pursue individual interests. In a democratic society, people are free to explore new ideas, innovate, and contribute their unique talents, which fuels social progress and economic growth. This freedom to think and act independently fosters creativity and diversity of thought, which are vital elements of a functioning democracy. 

    False information and attempted influence have always challenged people’s autonomy. For example, politicians and advertisers have long been accused of distorting, exaggerating, and even completely ignoring the truth. However, as AI becomes increasingly capable of mimicking human thought, it presents new and greater challenges to people’s ability to think freely about emergent political, economic, and social problems. Today’s powerful AI can increase both the quantity and the quality of misinformation and facilitate the creation of hyper-personalized digital content.

    AI-powered image generators can be used to create fake photorealistic images that perpetuate political narratives. Examples include AI-generated images of Donald Trump interacting with Black Americans24 and a Toronto mayoral candidate’s use of exaggerated images of homeless encampments25 on his website. Malicious actors can use AI-powered bots to flood social media with false or misleading content, polluting the information ecosystem and lowering trust in information among the general public.

    AI can also impact an individual’s political autonomy by using personal information (freely available in the era of social media) to create hyper-personalized content to sway individual opinions. The ability to target consumers with ads and content specifically tailored to their desires may not seem particularly insidious at first—after all, it could be considered useful to show a new parent when there are discounts on diapers. Yet when considered at a higher level, it becomes clear that such hyper-personalization also allows for insidious uses. People could be targeted based on their political views, for example, to attempt to sway these views in advance of an election. It is not a far stretch to imagine such targeting being used to amplify extremist viewpoints or incite violence. 

    Political parties and consultants have been engaging in the personalization of content and misinformation tactics for as long as mass audiences have existed. However, AI’s ability to create and distribute content at an unprecedented scale (and to considerably greater effect than previously possible) significantly exacerbates these problems. The sheer volume of realistic but false content has the potential to pollute the entire information ecosystem. This has serious ramifications for the ability to make informed, independent decisions based on a clear understanding of the facts—the essential basis of freedom of thought. In extreme cases, it may even go so far as to entirely displace autonomous political thinking. This concern is not limited to deliberate acts of interference.26 Even unintentional actions by AI systems, such as recommendation algorithms on social media, can amplify polarizing content, subtly influencing public opinion and destabilizing political landscapes without any explicit intent.

    In addition to undermining political autonomy through the proliferation of disinformation and hyper-personalization, AI threatens democratic autonomy in a subtler but equally significant way: by undermining the conditions necessary for independent thought and creative expression. The growing use of AI in artistic and journalistic workplaces has sparked concern not only over copyright infringement and job displacement, but also over the erosion of the economic and creative agency of individuals. Journalism informs democratic decision-making; art encourages self-reflection and challenges dominant narratives. When AI systems flood these domains with synthetic content or devalue human work, they constrain the capacity of individuals and communities to think critically, express themselves freely, and participate meaningfully in public life. Protecting these professions is thus not merely a matter of economic fairness—it is essential to preserving the democratic value of autonomy.

    AI has a clear impact on privacy and autonomy: the existence of technology-facilitated surveillance and dis/misinformation is well documented. Intrusive surveillance and misinformation are harmful in and of themselves, but their explicit threat to democratic society, particularly the ways in which they are exacerbated by rapidly advancing AI technologies, demands deep thinking about how to protect individual freedom in the age of AI. 

    Considerations to protect individual rights and freedoms

    The technological barriers that previously defended basic individual freedoms are being lifted. It is generally accepted that police may review security camera footage in an attempt to identify criminal suspects and that political parties may seek to tailor their messaging to influence individual voting behaviour. Regulations exist to limit both of these activities. Police surveillance is a heavily regulated process and, though political parties are not subject to privacy regulation in Canada, there have been past legislative efforts aimed at imposing new regulation in this area.27 The problem is that advanced AI systems are overwhelming these existing or proposed safeguards.

    There is a strong need to craft policies and regulations that protect basic individual rights. First, similar to how Canadian common law recognizes most health and financial data as sensitive personal information, the use of personal information for political purposes could also be deemed sensitive in order to put appropriate limits on personalized political messaging. Additionally, election advertising is already subject to considerable regulation in most democracies; specific provisions for AI-facilitated advertising could be considered here as well. Finally, in Canada, political parties are not currently subject to federal data protection law.28 Canada’s data protection laws are also in dire need of modernization; the most recent attempt at reform failed alongside the country’s effort to federally regulate AI when Bill C-27 died on the order paper in January 2025. Without federal AI legislation or adequate data protection, Canadians are vulnerable to exploitation and intrusions on their rights.

    A way forward

    The era of AI demands a renewed focus on privacy and autonomy, both as inherent goods and to ensure that other fundamental freedoms are not dampened by the chilling effect of mass surveillance. The emergence of these new technologies demands a rethinking of the boundary between the citizen and the state, and between the public and the private. These boundaries are defined by rights: constitutional rights protect citizens from state overreach, and property rights demarcate spaces of private control, among others. Rights evolve to adapt to new technologies; privacy rights, for example, have been extended to SMS conversations in recognition of texting as a new medium of private conversation.29

    Democracy at the individual level depends on access to trusted information and the ability to think critically, enabling independent reasoning and decision-making. Protecting an individual’s freedom to reflect on the kind of society they wish to live in is essential to democratic values. This process must remain free from undue pressure, influence, and surveillance. AI cannot be allowed to make democratic deliberation an artificial process.

  • Written by Alicia Demanuele, Maggie Arai, and Monique Crichlow

    This third essay widens the perspective from the individual to the collective, emphasizing social engagement and meaningful public discourse as essential values for an informed democratic populace. The impact of AI is examined as both a destabilizing force and a promising participatory tool for social cohesion, highlighting the challenge it poses to the collective engagement necessary for democratic legitimacy.

    Introduction

    Individual rights and freedoms are the foundation of democratic societies, ensuring that citizens can actively participate in governance and public life. Fundamental liberties, such as freedom of speech, expression, and assembly, along with the right to vote, are not just valued principles but essential pillars that uphold democracy itself. However, the core tenets of democracy are not just about the individual. Democracy in and of itself refers to the “power of the people”—it encapsulates the importance of social identity, which emerges from shared norms, values, and practices that shape how individuals see themselves within a collective. This foundation fosters social cohesion—the sense of solidarity, belonging, and mutual commitment that enables collective decision-making.

    AI is reshaping the way groups engage with each other and in democratic processes, influencing how information is shared, discussions unfold, social identity is formed, and collective decisions are made. As these technologies become more embedded in public life, they have the power to erode the foundations of informed and inclusive social engagement or strengthen democratic participation. The challenge lies in ensuring that, through deliberate governance and safeguards, AI serves as a tool for enhancing, rather than undermining, the social cohesion on which democracy depends. 

    AI as a destabilizing force in social cohesion 

    Democracies thrive on a diversity of freely exchanged perspectives and opinions, fostering mutual understanding and empathy, rich public discourse, and social cohesion. A key part of this cohesion comes from the way individuals build social identities through shared affiliations. These identities, in turn, enable collective agency—the ability of groups to define cultural norms, reinforce shared values, and coordinate action toward common goals. Sustaining cohesion depends on an open and trustworthy information ecosystem where diverse perspectives can interact, disagreements can be navigated, and shared understandings can emerge.

    However, ideological polarization in Canada is growing,30 and can be exacerbated by technologies like AI. By offering highly customized content online, AI algorithms can often reinforce certain beliefs through repetition, or even cause users to live in a personalized “filter bubble” or “echo chamber” where they are not exposed to ideas outside of the ones they already hold.31
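
    To make this mechanism concrete, the following minimal sketch (a toy simulation in Python with invented parameters, not any platform's actual algorithm) shows how a recommender that optimizes only for engagement can converge on serving a user a single slice of content:

    ```python
    import random

    # Toy filter-bubble simulation. Everything here is invented for
    # illustration: five stylized ideological positions, a user who is
    # likelier to click content near their own leaning, and a recommender
    # that greedily serves whatever has earned the most clicks so far.
    random.seed(42)
    TOPICS = [-2, -1, 0, 1, 2]         # stylized ideological positions
    USER_LEANING = 1.5                 # the user's true preference

    def click_probability(topic: int, leaning: float) -> float:
        """Engagement falls off with distance from the user's views."""
        return max(0.0, 1.0 - 0.4 * abs(topic - leaning))

    clicks = {t: 1.0 for t in TOPICS}  # recommender's engagement estimates
    shown = []

    for _ in range(200):
        # Greedy "exploit-only" policy: serve the topic with the highest
        # estimated engagement, breaking ties randomly.
        best = max(TOPICS, key=lambda t: (clicks[t], random.random()))
        shown.append(best)
        if random.random() < click_probability(best, USER_LEANING):
            clicks[best] += 1.0        # each click reinforces the estimate

    recent = shown[-50:]
    print("Distinct topics in last 50 recommendations:", len(set(recent)))
    print("Dominant topic:", max(set(recent), key=recent.count))
    ```

    In this toy setting, exposure narrows onto whichever content first earns reinforced clicks; at platform scale, with far richer behavioural signals, the same feedback dynamic is what the “filter bubble” critique describes.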

    Increasingly personalized content fragments shared reality, making it more difficult for individuals to form social identities rooted in common experiences. Without a shared sense of reality, connecting with others becomes more challenging, weakening social cohesion and deepening divisions that hinder mutual understanding or compromise. These divisions are reinforced by the digital economy, where online revenue, driven primarily by ads, increases with user attention and engagement. When algorithms prioritize corporate profit over the health of democracy, they exploit the attention economy by delivering content that captures user interest, often amplifying negative or sensational material that drives the most engagement.32 Without appropriate safeguards, AI threatens to intensify narrative uniformity, fundamentally altering how information is spread and accelerating ideological polarization at the expense of a diversity of perspectives.33 This undermines the open exchange of ideas and robust discourse vital to the social fabric of democracy and fractures the shared reality essential for collective decision-making.

    Disinformation and the erosion of democratic integrity 

    A healthy democracy depends on an informed citizenry, where individuals have access to credible information to make knowledgeable choices during elections, participate in public debate, and hold leaders accountable. However, the online information ecosystem has become increasingly polluted. AI enables the rapid creation and dissemination of mass disinformation, threatening the credibility and authenticity of information that democracies rely on. While disinformation has always shaped public discourse through mechanisms like propaganda or biased reporting, today’s environment presents a uniquely challenging threat due to the unprecedented scale and sophistication of AI-generated content. 

    People have often relied on trusted, established institutions like journalism to provide structure for verifying information and distinguishing between credible and misleading content. However, in the AI era, models such as ChatGPT are increasingly relied upon as sources of information, functioning like search engines, but with the ability to generate human-like responses in a conversational manner. While this shift reshapes how information is accessed, it also raises serious concerns given that even leading AI models have been shown to produce inaccurate, incomplete, misleading, and potentially harmful answers to queries for critical information, such as questions about elections.34 These shortcomings weaken public discourse, democratic participation, and trust in institutions.

    Beyond spreading falsehoods, AI-driven disinformation blurs the line between authentic public sentiment and artificially amplified narratives, making it harder to discern what constitutes genuine public opinion. This is evident in the broader phenomenon of astroturfing—the creation of fake grassroots efforts to portray a false impression of widespread shared opinion. Astroturfing strikes at the core of AI’s threat to collective agency. It creates the deceptive appearance of large-group consensus where no such consensus exists, manipulating the “power of the people” to instead serve individual interests. This is especially dangerous because human decision-making is deeply influenced by social cues—what others believe shapes our own beliefs, and conformity often carries social and psychological value.35 When large-scale consensus is fabricated, public discourse loses its legitimacy. For instance, nefarious actors could use astroturfing tactics to influence proposed regulation by flooding public consultations with AI-generated responses disguised as citizen input. Such nefarious uses of AI may cause politicians to look at constituent correspondence with a high degree of skepticism,36 thereby reducing the impact of traditional public opinion engagement that is characteristic of democracies.

    Technology has skewed perceptions of institutions and processes that once functioned as important checks and balances, deepening skepticism, sowing division, and complicating efforts to maintain an informed public. 

    Preserving the integrity of collective engagement

    Groups have always been central to how humans organize and play an important role in democratic societies. People naturally exist within groups—whether in workplaces, neighbourhoods, religious communities, activist organizations, or online networks—which shape how we connect, engage, and navigate society. Through these shared affiliations, individuals transition from personal identity to social identity, participating in subcultures that influence their beliefs, behaviours, and actions. In a democracy, such groups play a vital role in shaping public discourse, ensuring policies reflect the collective will and holding governments accountable.

    One way groups drive change in practice in democratic societies is through public interest organizations, which advocate for policy reforms, social justice, and the protection of collective rights. These groups rely on their ability to organize, mobilize supporters, and amplify underrepresented voices to influence decision-making. Traditionally, this has been achieved by forming collective movements, such as unions or advocacy groups, that gain strength through their membership and broader public awareness of their cause. AI-driven disruptions to the information ecosystem make it harder for these groups to reach and amplify marginalized voices, especially given that AI algorithms tend to reflect “averages” in online discourse,37 which may cause majority views and concerns to be favoured at the expense of minority groups.

    At the same time, the traditional mechanisms that advocacy groups depend on, such as petitioning governments or rallying public support, lack leverage over technology companies, which control the AI systems and data shaping public discourse. As a result, advocacy efforts are increasingly distorted, as major tech firms—driven by profit rather than public interest—hold growing influence over the shared spaces where democratic debate and knowledge mobilization take place.38

    If collective agency erodes, the consequences extend beyond the loss of agreement—they threaten to undermine coordinated social and political action, weakening democratic governance. Policies risk becoming detached from the public will, compromising democratic legitimacy. As a result, AI’s role in society must be carefully managed to ensure that it enhances, rather than diminishes, our capacity to organize, deliberate, and act together. Achieving this will require rethinking existing mechanisms, or developing new ones, for democratic deliberation and participation that are tailored to an AI- and data-driven world. This includes government interventions to curb the concentration of power among tech firms, transparency measures to ensure algorithmic accountability, and innovations in civic engagement that empower diverse voices. Without such efforts, the erosion of collective agency risks deepening democratic deficits, making societies more susceptible to manipulation and fragmentation.

    AI as a promising participatory tool for social engagement

    Despite the challenges AI presents for social and democratic engagement, it may also be an effective tool in realizing the collective benefits of democracy. Generative AI tools, like ChatGPT, have the potential to increase civic knowledge by allowing complex political and policy issues to be presented in an accessible way that aligns with individual learning styles.39 Notwithstanding the aforementioned issues with accuracy, reliability, and hallucination, AI can serve as one of many tools to facilitate learning and enable curiosity. In an ideal setting, citizens who are informed about current issues are better positioned to meaningfully participate in groups and exercise that collective agency to promote shared progress and interests.

    Deliberation is another fundamental element of democracy by which individuals and groups can interject their perspectives to influence the social order and normative values that are core to the social contract. AI has the potential to enhance these deliberative processes, thus strengthening society’s collective ability to govern.40 A prime example of this is vTaiwan, a decentralized open consultation platform that facilitates collaborative discussions on national issues.41 One of its core tools is Pol.is, an online tool for large-scale conversations focused on finding solutions to a variety of issues within the digital economy. It uses AI to discern clusters of similar sentiment, helping citizens understand competing perspectives and bridge divides by highlighting commonalities among polarized groups.42 By facilitating meaningful deliberation and enhanced citizen participation, tools such as Pol.is present clear opportunities for strengthening collective agency.
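
    As a rough illustration of the kind of analysis such deliberation tools perform, the sketch below (heavily simplified, with an invented vote matrix; Pol.is’s actual pipeline differs in its details) clusters participants by their agree/disagree votes and then surfaces statements that command agreement across the resulting opinion groups:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Rows are participants, columns are statements; entries are
    # +1 (agree), -1 (disagree), 0 (pass/unseen). The data is invented:
    # two opinion groups disagree on statements 0-3 but share common
    # ground on statement 4.
    votes = np.array([
        [+1, +1, -1, -1, +1],
        [+1, +1, -1,  0, +1],
        [+1,  0, -1, -1, +1],
        [-1, -1, +1, +1, +1],
        [-1, -1, +1, +1, +1],
        [ 0, -1, +1, +1, +1],
    ])

    # Step 1: cluster participants into opinion groups by voting pattern.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

    # Step 2: a statement is a candidate point of consensus if it has a
    # high mean level of agreement within *every* cluster.
    for s in range(votes.shape[1]):
        per_group = [votes[labels == g, s].mean() for g in np.unique(labels)]
        if min(per_group) > 0.5:
            print(f"Statement {s}: agreement across all groups "
                  f"(group means: {[round(m, 2) for m in per_group]})")
    ```

    Run on this toy matrix, only statement 4 surfaces, mirroring how Pol.is-style tools highlight commonalities between otherwise polarized groups.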

    AI tools can also provide administrative support for the types of routine tasks necessary to uphold various democratic processes, such as summarizing and translating audio and text outputs from collective deliberations. In practice, it can be difficult for government bodies to effectively process all public input—AI tools can assist with this by processing enormous amounts of data. In a similar vein, AI tools can collect, summarize, and make sense of public sentiment for both governments and civil society, bringing collective opinion and consensus to the forefront for key decision-makers. One example of such technology is Delib, which provides governments and related organizations with tools that facilitate citizen participation in decision-making. 

    Finally, AI can be leveraged by public interest and advocacy groups to amplify their efforts, empowering citizens to participate more actively in collective initiatives and influence policymaking. For instance, in the United States, the platform Civis helped the Human Rights Campaign—the nation’s largest civil rights organization advocating for LGBTQ equality—streamline its data analysis and reporting processes.43 This enabled the organization to allocate more time and resources to engaging with constituents and supporting pro-equality candidates. By providing powerful tools like these, AI can bolster the impact of collective agency, allowing advocacy groups to focus on advancing their missions and amplifying their voices in the political and social spheres.

    These examples illustrate that AI does not only present a threat to social cohesion and the collective agency vital to well-functioning democracies; it also presents many opportunities to amplify and support democratic processes. However, it is important to note that AI applications are currently overwhelmingly centralized in a few large technology firms, with a focus on corporate, rather than public, interest. It is therefore vital to pay close attention to how AI continues to shape and impact democracy, and intervene to ensure this impact is beneficial rather than detrimental.

    A way forward

    AI is reshaping the social foundations of democracy, influencing not just how we engage with information online, but how we forge social identities, build consensus, and exercise collective agency. As a society, we must preserve our ability to connect with others, form groups, and participate meaningfully in democratic processes. 

    As policymakers design frameworks and interventions to unlock the benefits of AI, an approach that maintains critical aspects of democratic integrity is necessary. Such approaches should promote deliberation and balance diverse voices and power asymmetries in deriving solutions. If left unchecked, AI risks fragmenting the very social fabric that enables democratic engagement and collective decision-making. It is essential, then, that AI be harnessed in a way that supports, rather than diminishes, our collective capacity to deliberate and act together.

  • Written by Jamie Sandhu and Monique Crichlow

    This fourth essay goes beyond borders to confront AI’s impact on democratic sovereignty in an interconnected world. Today’s sovereignty must extend beyond physical borders, rising to meet the digital domain. The essay discusses how international cooperation is no longer a choice but a necessity, one that might counterbalance AI’s destabilizing effects on democracies, with collaborative frameworks serving as a potential safeguard.

    Introduction 

    Prior essays in this series have traced the evolving relationship between AI and democratic values, highlighting challenges at the administrative state, individual, and group levels. This piece shifts focus to a global perspective, emphasizing sovereignty, which now extends beyond physical borders into a globalized world fueled by AI. States must navigate powerful actors, complex technologies, and the trade-offs between maintaining control and fostering global interconnectedness—all while upholding the democratic values inherent in global interactions and rooted in the rule of law that underpins the international governance system to which states adhere. This essay surveys opportunities for international governance to counterbalance AI’s destabilizing effects, suggesting that collaborative frameworks may be essential to preserving state sovereignty and democratic integrity in the face of rapid technological change. 

    A paradigm shift for sovereignty and democracy 

    As AI continues to evolve within a globalized system—shaped by various forms of government, powerful private entities, and expanding control over digital spaces—sovereignty is increasingly strained. Traditionally, sovereignty has been rooted in territorial control, with land borders defining the extent of state authority. This foundational understanding has allowed states to retain authority not only over political, economic, and social affairs within their borders, but also to manage external relations such as regulating cross-border movements of goods, services, people, and capital. 

    However, the advent of advanced AI complicates traditional notions of state autonomy. AI’s reliance on global data flows, cross-border digital infrastructure, and decentralized networks of actors weakens states’ ability to regulate and control AI within their borders, while simultaneously drawing them into foreign policy decisions driven by concerns about being excluded from the benefits of AI adoption. External actors, including multinational corporations and foreign governments, significantly influence the development and governance of AI, challenging the state’s monopoly on authority.

    In democratic societies, sovereignty is a core value of political legitimacy, enabling citizens to exercise power through their elected government. Yet, without stronger global governance frameworks, the unchecked proliferation of AI risks undermining state sovereignty and threatens the very foundation of democratic legitimacy. The impact of AI on democratic governance tests the resilience of sovereign state power in an increasingly interconnected world in several ways.

    Compromised governance and sovereignty 

    One major challenge lies in the AI-rich global economy, where AI technology companies are overwhelmingly concentrated in Western democracies—especially the United States.44 This imbalance of power can undermine states and force them to relinquish their sovereignty in various ways. 

    One way this imbalance manifests is through digital extractivism—a process, described by researchers, in which firms extract vast amounts of data from users around the world. This extraction thrives in regulatory environments with minimal oversight, enabling firms to collect data with little accountability to the countries of origin.45 These firms—often located in countries with advanced infrastructure, substantial research and development funding, and large user bases—use the data extracted from other states and local industries to develop profitable, proprietary AI models.

    Moreover, data collected from around the world is transferred across borders, landing in servers located in major data-hosting nations. This practice subjects data to foreign legal frameworks, regardless of where it originated. As highlighted by experts, regulators in the nations where data originates often lack the means or leverage to counter these practices, weakening or eliminating state sovereignty over digital assets and reinforcing the economic inequalities that have long defined global relations.46

    This situation not only helps extractivist states to gain economic power via technological development, but also allows firms to shape regulations in their favour, further diminishing national control over key resources. These trends are especially evident in regions with weak data protection laws or outdated legal frameworks that struggle to keep pace with AI advancements. In fact, this regulatory landscape is extremely common. While the European Union and China have enacted strict data localization laws to safeguard their sovereignty, such measures remain rare globally.

    In the global AI economy, firms influence both foreign and domestic policies and regulations, which in turn cement their dominance. Nations lacking substantial AI development capabilities then become reliant on foreign-controlled technologies and data infrastructure. In a global AI-driven economy, this reliance compromises their ability to govern and protect their citizens’ interests independently. As a result, the economic and political decisions of these nations are increasingly shaped by the priorities of technology giants and the countries where these firms are based, rather than by their own governments. This dependency not only jeopardizes control over critical resources but also threatens the very foundation of national governance.

    AI, election integrity, and national security

    While foreign control over critical components of a state’s policy is not unprecedented, AI intensifies these challenges by transcending national borders and influencing global decision-making processes. Its vast data requirements and deep integration into the societal fabric give it a unique global reach, making it a significant player in shaping domestic and international political landscapes. AI-driven technologies, as explored in earlier essays on individual freedoms and social values, possess the power to influence public opinion, automate decisions, and even predict or manipulate behaviour on a global scale—often without sufficient oversight from any one nation. With regard to sovereignty, the problem is that AI facilitates voter manipulation which, in turn, grants some foreign control over a nation-state.

    A key example of AI’s global influence on democracy is election integrity. The Cambridge Analytica scandal,47 in which AI-driven algorithms were used to micro-target voters in multiple countries,48 raises concerns about how foreign entities might manipulate democratic processes on a global scale. This concern is not limited to deliberate acts of interference.49 Even unintentional actions by AI systems, such as recommendation algorithms on social media, can amplify polarizing content, subtly influencing public opinion and destabilizing political landscapes without any explicit intent.

    As AI technologies become more sophisticated, so too does their potential to interfere with democratic systems—raising growing concerns about national security. One prominent example is the use of AI-generated deepfakes to spread mis/disinformation50 and undermine election integrity.51 This issue has drawn the attention of policymakers around the world, leading to legislative responses in countries like the United States52 and Canada,53 aimed at safeguarding democratic processes from foreign AI-enabled interference.

    Beyond elections, the global implications of AI extend into military and national security strategies. AI-powered technologies are transforming modern warfare and challenging traditional understandings of sovereignty and territorial control. For instance, autonomous weapons systems, as described by researchers,54 use advanced algorithms to detect movement patterns and analyze communications, enabling states to identify threats and carry out targeted operations with unprecedented accuracy. However, these same capabilities can be co-opted by hostile actors to conduct espionage, disrupt critical infrastructure, or enter a nation’s airspace undetected, as is increasingly possible with precision drones—thereby undermining sovereignty. Moreover, researchers warn that growing reliance on foreign AI technologies may erode national control over security operations, particularly in the absence of robust international frameworks to limit external influence on sovereign defense strategies.55

    The erosion of domestic control over critical infrastructure and political discourse challenges traditional notions of sovereignty and weakens the foundations of democratic governance. As national borders grow more permeable to global technological influence, states face a difficult choice: to strengthen sovereignty through self-reliance or to engage with international AI governance frameworks.

    On one hand, global cooperation can help address transnational security risks and foster innovation. But when unbalanced or dominated by a few powerful actors, it can also expose states to external pressures that compromise autonomy and democratic decision-making. On the other hand, a more protectionist stance might better preserve national control—but at the cost of isolating a country from the very collaborations and technological advances needed to safeguard democracy in a connected world.

    Precarious futures and the opportunity for international governance 

    States less aligned with Western democratic values are emerging as major leaders in AI56 and offering alternative models57 for an AI-enabled society. As the influence of AI development from these regions grows, through initiatives like China’s Digital Silk Road, countries may face tough decisions about whether to align their policies with a Western model that emphasizes democratic governance or with a less democratic alternative that offers advantages like sovereign state control and economic integration.58

    In practice, this could mean adopting AI in ways that prioritize state control over data and technology, often at the expense of transparency or individual privacy. For instance, states might opt for AI-driven surveillance or centralized data collection technology, enabling stronger economic integration and sovereign control. At the same time, they may move away from privacy-focused AI policies and decentralized characteristics typically advocated by Western democracies. Such decisions could not only shape internal governance, but also influence global AI partnerships and technological standards in ways that drift away from core democratic values.

    This dynamic has profound implications for global society. AI empowers states to innovate their economies, strengthen national security, and improve public services. Yet it also introduces pressure by forcing governments to navigate trade-offs between advancing domestic priorities and engaging in global cooperation. For example, while AI can drive economic growth and bolster security, dependence on systems developed by foreign entities or shaped by international frameworks without adequate safeguards or inclusive governance can compromise sovereignty—granting external actors influence over critical infrastructure and limiting a state’s autonomy. In this way, AI becomes a double-edged sword: it offers strategic advantages, but also intensifies the tension between protecting national interests and engaging with global responsibilities in the digital age.

    Given these tensions, international governance may offer the most viable path for balancing state autonomy with global cooperation. Rather than forcing states to choose between isolation and vulnerability, it creates space for shared responsibility and mutual protection. This approach builds on post-war foreign policy theory, which holds that global challenges, such as AI development, cannot be addressed in isolation, but rather through cooperation. By establishing shared standards and rules, international governance enables states to collectively manage the risks of AI, preventing more powerful actors from exerting disproportionate influence over global digital infrastructures. Such an approach helps maintain stability and protect democratic institutions. Just as global democratic governance was once championed to create resilient, equitable societies, an international AI governance framework could now provide a balanced way forward—safeguarding state autonomy while enabling technological progress. Without such frameworks, smaller or less technologically advanced states risk being left vulnerable to external control over critical AI-driven technologies.

    This strategy could be effectively pursued through three critical avenues. First, by prioritizing international policy that addresses the overlapping concerns of an AI-enabled economy, countries can work together to navigate shared challenges. Second, ensuring appropriate protections for the data that drives AI systems—through the development of an extraterritorial data governance framework—could help safeguard citizens’ data while enabling the fair use of data by those developing AI technologies. Finally, the development of international standards for AI governance could support cross-border activities and promote reliability, fairness, accountability, and privacy in AI products and services. A coordinated global approach to AI governance would not only support the development of strong mechanisms and policy alignment but also help ensure that the sovereignty of all states is respected, regardless of their technological capacity.

    A way forward

    The rapid advancement of AI presents both significant opportunities and complex challenges for state sovereignty. Emerging forms of AI, such as multi-agent systems and frontier AI models, challenge or undermine state sovereignty by making autonomous decisions that transcend national boundaries. AI not only tests traditional concepts of sovereignty but also adds layers of complexity to governance in a highly interconnected world. As these technologies evolve, they underscore the tension between maintaining domestic control and engaging in global integration. The task ahead is to craft collaborative, international governance structures that not only respond to the technological and economic impacts of AI, but also safeguard democratic governance and state autonomy. 

  • Written by Monique Crichlow and David Baldridge

    Finally, the question of the social contract itself remains. As AI reshapes societal norms and governance structures, this fifth essay contemplates how democracies may need an updated social contract to align AI’s impact with democratic principles. To achieve this, the essay outlines three essential conditions for this renewal: a political commitment to democratic stability, the integration of democratic values into AI design, and the establishment of new governance frameworks. 

    Introduction

    As this series has made clear, AI will significantly impact democratic values, institutions, and processes, from the individual to state levels. Necessarily then, it will impact the modern social contract—a concept rooted in Western thought and central to many democracies. Put simply, the social contract holds that individuals are willing to give up some freedoms to a legitimate authority in exchange for the benefits and stability of social order. However, if AI is challenging this arrangement, is there a need for a new social contract for the era of AI, one that preserves the core elements of democratic societies while adapting to new socio-technical realities? 

    A new social contract 

    The social contract asserts that the authority of government stems from the consent of the people. It is a reciprocal agreement between the people and the state, wherein the people submit to the authority of a sovereign in exchange for protection, stability, and ongoing order. Democracy plays a crucial role in upholding the social contract by grounding it in shared values, which are transformed into rules, laws, and governing institutions to create and maintain collective reciprocity and benefits. 

    Foundational to this arrangement is human agency—the capacity to think logically, act rationally, and exercise self-restraint to pursue long-term collective stability and mutual respect. Through rational decision-making, “the people” influence, shape, and benefit from the structures and agreements established under the social contract. 

    AI and its increasingly agentic capabilities challenge these ideas. This transformational technology is demonstrating the ability to make rational, logical choices and to generate value in domains once reserved exclusively for humans, such as creating art or working without tiring. Furthermore, recommendation algorithms and targeted advertising powered by AI on social media platforms can shape public discourse by prioritizing certain viewpoints, influencing consumer behaviour, and reinforcing echo chambers. Additionally, phenomena like “dead internet theory” underscore concerns about AI-generated content dominating online interactions.59
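
    To make the echo-chamber mechanism concrete, the toy sketch below (in Python, with entirely hypothetical names and numbers) shows how a ranker optimized purely for predicted engagement tends to surface stance-congruent content first, narrowing the range of viewpoints a user actually sees. It is a deliberate simplification for illustration, not a description of any real platform’s system.

    from dataclasses import dataclass

    @dataclass
    class Post:
        topic: str
        stance: float  # -1.0 to 1.0: position on a contested issue

    def predicted_engagement(user_stance: float, post: Post) -> float:
        # Assumption: users engage more with stance-congruent content, so a
        # ranker trained purely on engagement learns to favour like-minded posts.
        return 1.0 - abs(user_stance - post.stance) / 2.0

    def rank_feed(user_stance: float, candidates: list) -> list:
        # Sorting by predicted engagement surfaces agreeable content first.
        return sorted(candidates,
                      key=lambda p: predicted_engagement(user_stance, p),
                      reverse=True)

    # A user leaning +0.7 sees the +0.8 post ranked first and the -0.9 post last.
    posts = [Post("policy", s) for s in (-0.9, -0.3, 0.1, 0.8)]
    for post in rank_feed(0.7, posts):
        print(f"stance={post.stance:+.1f} score={predicted_engagement(0.7, post):.2f}")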

    By demonstrating agentic capabilities of its own and radically transforming the information ecosystem, AI is disrupting the social contract. While a social contract implies a certain degree of universalism, AI tools can be deployed as instruments of socioeconomic or geopolitical power, interfering with labour markets and democratic processes to the advantage of those with the resources, and the lack of scruples, to fully exploit this frontier technology. Moreover, the proliferation of AI agents undermines the basic human ability to think logically and act rationally. These capacities are eroded when humans face an information ecosystem ravaged by deepfakes and misinformation, and bombarded by psychologically powerful algorithms that manipulate and divert attention. Consequently, affirming a core set of democratic values that persist despite AI’s transformational effects is non-negotiable.

    In this context, establishing a new social contract for the AI era is about ensuring that the rules, laws, and institutions that uphold democratic values are preserved in the face of significant technosocial change. This new social contract would address human agency, participation in shaping social tradeoffs, and state sovereignty, all exercised through legitimate forms of authority in response to changing normative preferences and social contexts.

    Affirming values and rights that matter

    Evaluating which values and rights to uphold to preserve democracy is not about passing normative judgment on AI or debating its social status. It simply acknowledges that, by virtue of the social contract, AI’s role in society necessitates guidance to ensure its alignment with democratic values and social order. This begins with affirming human agency, guarding against undue influence, and addressing implications for societal organization, state authority, and the creation of new rights in the age of AI. 

    First, if human agency and the ability to freely pursue ideas and generate outputs are to remain core tenets, the desire to safeguard against undue influence and interference ought to extend to AI as well. In practical terms, this might include the right to have certain decisions made entirely by humans, free from AI influence, or, at least, the right to be informed whether, and to what extent, AI was used in making a decision (one possible form of such a disclosure is sketched after these four considerations).

    Second, safeguarding the ability to connect socially, organize into groups, and engage in the democratic process through inclusive deliberation and norm-setting must be prioritized. A democracy cannot exist without the fora and the freedom to deliberate on the societal values that underlie it. Indeed, the very act of affirming rights in the age of AI requires this. Sharply divergent views already exist on the proper development and role of AI in society. Inclusive processes should help construct a network of rights that appropriately balances the views of different groups and protects widely held core values.

    Third, it is essential to ensure that states and their administrative structures are respected as having legitimate authority over their populations, borders, and affairs. This includes addressing the power asymmetry between those who develop AI systems and the state, ensuring that the power to set rights and regulations remains balanced. The fair distribution of economic prosperity must also be a priority, with states playing a key role in shaping AI’s impact on their economies. Ultimately, this approach will ensure the preservation of the democratic state and the rule of law. 

    Finally, some required rights will be wholly new concepts, unique to the world of AI. Some of these rights will protect the individual from the novel threats AI poses to personal freedom. This could include the right not to be digitally replicated, to decline digital immortality, or to have freedom from AI influence over certain high-stakes decisions like criminal punishment or access to expensive experimental medical treatments. In some contexts, rights and obligations will be bestowed on AI systems themselves, like the ability to earn or hold property and be sued for harmful activity.60 These emerging rights and obligations will extend the social contract to account for these novel agentic actors within society and create a distinct category of legal relationship. They will cement AI’s new role in democratic society by appropriately accounting for the social, political, and economic significance of this new technology.
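
    Returning to the first of these considerations, the hypothetical sketch below illustrates how a right to be informed about AI involvement could be operationalized as a machine-readable disclosure record attached to each administrative decision. The field names and involvement categories are illustrative assumptions, not an existing legal standard.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class AIInvolvement(Enum):
        NONE = "no AI involvement"
        ADVISORY = "AI recommendation reviewed by a human decision-maker"
        DETERMINATIVE = "AI output directly determined the outcome"

    @dataclass
    class DecisionDisclosure:
        decision_id: str
        involvement: AIInvolvement
        model_summary: Optional[str] = None   # plain-language description, if AI was used
        human_reviewer: Optional[str] = None  # accountable official, if any

        def notice_text(self) -> str:
            # The plain-language notice a citizen would receive with the decision.
            return f"Decision {self.decision_id}: {self.involvement.value}."

    disclosure = DecisionDisclosure(
        decision_id="2025-00042",
        involvement=AIInvolvement.ADVISORY,
        model_summary="Risk-scoring model used to triage applications",
        human_reviewer="Case officer",
    )
    print(disclosure.notice_text())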

    A way forward

    The path to achieving a new social contract that preserves democratic values requires three essential conditions to be met. First, there must be strong political will and incentives to maintain social stability and engage in the process of articulating democratic values as new digital rights. Second, these values must inform AI system design and technical standards. This ensures that AI technologies—especially those with agency and those operating in high-stakes areas—do not negatively impact distinctly human interests. This requires careful consideration of both design principles and implementation practices. Finally, new administrative structures, such as rules and institutions, must be established to provide governance over evolving human-AI interactions, particularly when AI operations diverge from core democratic principles.
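
    To suggest how the second condition might translate into system design, the minimal sketch below encodes one such constraint: decisions in domains flagged as high-stakes are never fully automated, and low model confidence also triggers escalation to a human. The domain list and threshold are hypothetical assumptions, not prescriptions.

    HIGH_STAKES_DOMAINS = {"criminal_justice", "medical_treatment", "immigration"}

    def requires_human_review(domain: str, ai_confidence: float) -> bool:
        # High-stakes domains always require a human; elsewhere, low model
        # confidence also triggers escalation.
        return domain in HIGH_STAKES_DOMAINS or ai_confidence < 0.8

    def decide(domain: str, recommendation: str, ai_confidence: float) -> str:
        if requires_human_review(domain, ai_confidence):
            return f"ESCALATED to human reviewer (AI suggested: {recommendation})"
        return f"AUTOMATED: {recommendation}"

    print(decide("criminal_justice", "deny parole", 0.95))  # always escalated
    print(decide("library_fines", "waive fee", 0.92))       # may proceed automatically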

    As society is rapidly transformed by the increasing adoption of advanced AI systems, the social contract must be updated to ensure that AI supports, rather than undermines, democratic values. Building such a contract preserves the foundational elements of democratic societies while adapting to new socio-technical realities. It safeguards humanity while recognizing that democracy and innovation require ongoing processes of experimentation, compromise, and trade-offs.

  • AI and Democratic Governance

    1 Rock, Daniel, and Frank Rudzicz. “Machine Learning in the Workplace; Absolutely Interdisciplinary 2023.” Schwartz Reisman Institute for Technology and Society. YouTube, August 24, 2023.

    2 Funk, Allie, Adrian Shahbaz, and Kian Vesteinsson. “Freedom on the Net 2023: The Repressive Power of Artificial Intelligence.” Freedom House, 2023.

    3 Burton, Joe. “Algorithmic Extremism? The Securitization of Artificial Intelligence (AI) and Its Impact on Radicalism, Polarization, and Political Violence.” Technology in Society, 75. November 2023.

    4 Arora, A., Barrett, M., Lee, E., Oborn, E., and Prince, K. “Risk and the Future of AI: Algorithmic Bias, Data Colonialism, and Marginalization.” Information and Organization 33, no. 3. September 2023.

    5 Karl, Jonathan. “How Healthcare Chatbots Are Expanding Automated Medical Care.” HealthTech, August 14, 2020.

    6 Davidson, Nikki. “What Government Can Learn from the Private Sector About AI.” Government Technology, July 10, 2024.

    7 Associated Press. “Sora, OpenAI’s new text-to-video tool, is causing excitement and fears. Here’s what we know about it.” Euronews, February 18, 2024.

    8 Makridis, Christos, and Gil Alterovitz. “Measuring and Understanding Differences in Private and Public Sector Technology Jobs: Evidence from Artificial Intelligence Job Posting Data.” Lightcast. SSRN. July 12, 2024.

    9 Hadfield, Gillian, Maggie Arai, and Isaac Gazendan. “AI Regulation in Canada Is Moving Forward. Here’s What Needs to Come Next.” The Hill Times, May 22, 2023.

    10 Canada. Parliament. House of Commons. Bill C-27, Digital Charter Implementation Act, 2022, 44th Parliament, 1st Session, November 22, 2021–January 6, 2025. Sponsored by the Minister of Innovation, Science and Industry.

    11 Gilbert, Stephen. “The EU Passes the AI Act and Its Implications for Digital Medicine Are Unclear.” npj Digital Medicine 7, no. 135. May 22, 2024.

    12 Cass-Beggs, Duncan. “A Welcome Voice for Canada on the Future of AI.” Centre for International Governance Innovation, April 30, 2024.

    13 The Agenda. “Is Canada Falling Behind on AI?” TVO Today. YouTube, April 15, 2024.

    14 Allen, Gregory C., and Isaac Goldston. “The Biden Administration’s National Security Memorandum on AI Explained.” Center for Strategic and International Studies, October 25, 2024.

    15 Wong, Alyssa. “Regulatory Gaps and Democratic Oversight: On AI and Self-Regulation.” Schwartz Reisman Institute for Technology and Society, University of Toronto, September 21, 2023.

    16 Goodwin, Keith. “Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010.” Federal Reserve History, July 21, 2010.

    17 Sheehan, Matt. “China’s AI Regulations and How They Get Made.” Carnegie Endowment for International Peace, July 10, 2023.

    18 Hadfield, Gillian, Mariano-Florentino (Tino) Cuéllar, and Tim O’Reilly. “It’s Time to Create a National Registry for Large AI Models.” Carnegie Endowment for International Peace, July 12, 2023.

    19 Clark, Jack, and Gillian K. Hadfield. “Regulatory Markets for AI Safety.” arXiv, 2019.

    Individual freedoms and AI

    20 “Surveillance.” Information and Privacy Commissioner of Ontario, February 2025.

    21 “Privacy guidance on facial recognition for police agencies.” Office of the Privacy Commissioner of Canada, May 2022.

    22 Wallace, Sarah. “Madison Square Garden’s Ban on Lawyers Suing Them Can Remain in Place, Court Rules.” NBC 4 New York, March 30, 2023.

    23 Jaffer, Jameel, and Alexander Abdo. “Supreme court cellphone case puts free speech – not just privacy – at risk.” The Guardian, November 27, 2017.

    24 Spring, Marianna. “Trump supporters target black voters with faked AI images.” BBC, March 4, 2024.

    25 Alberga, Hannah. “Three-armed person mistakenly exposes AI-generated images in Toronto mayoral platform.” CTV News, June 13, 2023.

    26 De Marzo, Giordano. “Are online recommendation algorithms polarising users’ views?” Polytechnique Insights, January 24, 2024.

    27 Langevin, Liliane, et al. “Canada’s New Elections Bill to Limit Third-Party Financing, Advertising and Disinformation.” Blakes, Cassels & Graydon LLP, April 9, 2024.

    28 Boutilier, Alex. “Canada’s political parties are exempt from privacy laws. Voters say that needs to end.” Global News, April 25, 2023.

    29 Zita, Jessica. “R. v. Marakah: Outgoing

    Balancing AI and social cohesion

    30 Ling, Justin. “Far and Widening: The Rise of Polarization in Canada.” Public Policy Forum, August 1, 2023.

    31 Helbing, Dirk, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter. “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American, February 25, 2017.

    32 Lewsey, Fred. “Slamming Political Rivals May Be the Most Effective Way to Go Viral.” University of Cambridge, June 22, 2021.

    33 Dubow, Ben. “Part Two: Is AI a Threat or Opportunity for Democracies?” CEPA, August 2, 2023.

    34 Angwin, Julia, Alondra Nelson, and Rina Palta. “Seeking Reliable Election Information? Don’t Trust AI.” Proof, February 27, 2024.

    35 Chan, Jovy. “Online Astroturfing: A Problem beyond Disinformation.” Philosophy & Social Criticism 50, no. 3 (June 16, 2022): 507–28. https://doi.org/10.1177/01914537221108467.

    36 Kreps, Sarah, and Douglas Kriner. “How Generative AI Impacts Democratic Engagement.” Brookings, March 21, 2023. 

    37 Jungherr, Andreas. “Artificial Intelligence and Democracy: A Conceptual Framework.” Social Media + Society 9, no. 3 (July 16, 2023). https://doi.org/10.1177/20563051231186353.

    38 Sanders, Nathan, Bruce Schneier, and Norman Eisen. “How Public AI Can Strengthen Democracy.” Brookings, March 4, 2024.

    39 Schiff, Kaylyn Jackson, and Daniel S Schiff. “Generative AI like ChatGPT Could Help Boost Democracy – If It Overcomes Key Hurdles.” The Conversation, November 7, 2023.

    40 Landemore, Hélène. “Fostering More Inclusive Democracy with AI.” IMF Finance & Development Magazine, December 2023.

    41 Horton, Chris. “The Simple but Ingenious System Taiwan Uses to Crowdsource Its Laws.” MIT Technology Review, August 21, 2018.

    42 Tang, Audrey, Rosalind Liu, and Wendy Hsueh. “Digital Democracy in the Age of AI.” Public Digital Innovation Space, September 15, 2023.

    43 “Data Integration & Identity Resolution for Human Rights Campaign.” Civis Analytics, October 12, 2023.

    The evolution of sovereignty in the age of AI

    44 White, Joe, and Serena Cesareo. “The Global AI Index.” Tortoise Media, 2024.

    45 Kannan, Prabha, and Neema Iyer. “Neema Iyer: Digital Extractivism in Africa Mirrors Colonial Practices.” Stanford HAI, Stanford University, August 15, 2022.

    46 Feldstein, Steven. “New Digital Dilemmas: Resisting Autocrats, Navigating Geopolitics, Confronting Platforms.” Carnegie Endowment for International Peace, November 29, 2023.

    47 Confessore, Nicholas. “Cambridge Analytica and Facebook: The Scandal and the Fallout so Far.” The New York Times, April 4, 2018.

    48 Owen, Taylor. “What We Know - and Don’t Know - about Microtargeting and Its Influence on Political Behaviour.” Centre for International Governance Innovation, December 5, 2019.

    49 De Marzo, Giordano. “Are Online Recommendation Algorithms Polarising Users’ Views?” Polytechnique Insights, January 25, 2024.

    50 Ofcom. “Deepfake Defences: Mitigating the Harms of Deceptive Deepfakes.” July 23, 2024.

    51 Bleisch, N. David. “Deepfakes and American Elections.” American Bar Association, May 6, 2024.

    52 “Governor Newsom Signs Bills to Combat Deepfake Election Content.” Office of the Governor of California, September 17, 2024.

    53 Government of Canada, The Honourable Marie-Josée Hogue. Volume 5, Chapter 19, “Recommendations to Better Protect Against Foreign Interference in Canada’s Democratic Institutions and Processes.” Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions, January 28, 2025.

    54 Muthyala, John. “Drones and Surveillance Cultures in a Global World.” Digital Studies/le Champ Numérique 9, no. 1. September 27, 2019.

    55 Csernatoni, Raluca. “Governing Military AI amid a Geopolitical Minefield.” Carnegie Endowment for International Peace, July 17, 2024.

    56 “AI Index Report 2024: Measuring Trends in AI.” Artificial Intelligence Index, 2024.

    57 Zhang, Angela Huyue. “The Promise and Perils of China’s Regulation of Artificial Intelligence.” Columbia Journal of Transnational Law (forthcoming). January 28, 2024.

    58 Cheney, Clayton. “China’s Digital Silk Road: Strategic Technological Competition and Exporting Political Illiberalism.” Council on Foreign Relations (blog), September 26, 2019.

    A new social contract for an AI-enabled world

    59 Hern, Alex. “TechScape: On the Internet, Where Does the Line between Person End and Bot Begin?” The Guardian, April 30, 2024.

    60 Hadfield, Gillian. “How to Prevent Millions of Invisible Law-Free AI Agents Casually Wreaking Economic Havoc.” Fortune, October 17, 2024.

  • Arai, Maggie, David Baldridge, Monique Crichlow, Alicia Demanuele, and Jamie Sandhu. 2025. Democracy Rewired: Safeguarding Democratic Values in the Age of AI. Toronto: Schwartz Reisman Institute for Technology and Society. https://srinstitute.utoronto.ca/news/democracy-rewired.