AI and trust: Security technologist Bruce Schneier explores governance in the age of machine agency

 

What does it mean to trust artificial intelligence, and how should we govern technologies we don’t yet fully understand? These were among the central questions posed at a recent event hosted by the Schwartz Reisman Institute for Technology and Society (SRI), featuring internationally renowned security technologist Bruce Schneier in conversation with SRI Director David Lie.

Known for his work on security, privacy, and emerging technologies, Schneier framed the challenges of AI governance through the lens of trust. The talk, titled AI and Trust, unpacked Schneier’s core thesis: trust in AI cannot be treated as a human trait projected onto machines, but must instead be designed, governed, and enforced through law, accountability, and technical rigor.

“We have a problem with AI — the data and control are on the same path. Now, you can't just separate it because the data is the control,” Schneier warned, referring to a traditional security paradigm where control is trusted, while data is untrusted, and so the two must be kept separate. This separation is a rule that many AI systems break. 

Safety, integrity, and regulation: The three workflows of trustworthy AI

Schneier explored the notion of three interdependent work streams that must guide the development and deployment of AI systems: AI safety, AI integrity, and AI regulation. These, he argued, form the backbone of any serious approach to AI governance.

  • AI safety focuses on ensuring that systems do not cause harm, especially when embedded in consequential decision-making environments.

  • AI integrity demands that systems function in a reliable, secure, and verifiable manner — from training data to final output.

  • AI regulation encompasses the legal and policy frameworks needed to enforce accountability and prevent abuse.

He emphasized that these concerns are not merely technical, but deeply political and social, noting:

“These are all big research questions. They’re interdisciplinary. And they are how we will create the trust that society needs in this AI era.”

Schneier called for a robust regulatory agenda — including AI transparency laws, security standards, and mechanisms to hold not just AI systems, but the humans behind them, accountable. He voiced support for the EU AI Act as a starting point, while cautioning that even it risks misplacing responsibility.

“The one mistake I think [the EU AI Act] makes is they spend a lot of time regulating the AI and not the humans behind the AI,” he said. “If you want trustworthy AI, you need trustworthy AI controllers.”


Public AI and the case for counterbalance

Beyond regulation, Schneier made a case for the creation of public AI infrastructure — models developed by academia, government, or nonprofit institutions that are not driven by corporate profit motives. These systems, he argued, could help correct for market failures and offer alternatives that prioritize democratic values.

This vision aligns closely with SRI’s mission to explore technology through a societal lens, encouraging interdisciplinary collaboration and public-interest innovation.

Audience questions extended the conversation to include concerns about monopoly power, cultural bias, and the feasibility of decentralized governance models. Schneier remained clear-eyed about the risks but also optimistic about the potential for collective solutions.

SRI Director David Lie and Bruce Schneier.

From public safety to collective governance

Schneier made a compelling case for understanding AI through the lens of public safety and collective governance. He pointed out that while corporate actors may prioritize efficiency and profit, the public interest demands a different approach — one that safeguards rights, minimizes harm, and reflects shared values.

His call to action was clear: governance must become embedded into the development process. This includes everything from auditability requirements and liability frameworks to regulatory bodies empowered to act before harms occur, not after.

Audience members engaged deeply with these themes, raising questions about consent, surveillance, and the international implications of AI policy. 

A thoughtful reckoning with the future

Throughout the event, Schneier returned to one idea again and again: trust must be systemically earned, not assumed.

“When I think of integrity in an autonomous vehicle system, I need it to accurately map the real world it is driving in — because if it gets it wrong, people will die,” he said, offering a stark reminder that questions of trust in AI are not abstract, but urgently practical.

As the conversation came to a close, Schneier left the audience with both a challenge and a vision: that AI governance is not only possible, but essential — and that now is the time to build the frameworks we’ll need for a future where machines make decisions alongside humans.

In the end, as Schneier points out, despite their flaws, “We’re going to use these systems. We’re going to trust these systems anyway — even though they’re not trustworthy... if we need to know that that whole thing is trustworthy we need integrity,” reminding us that true trust hinges not just on technology, but on the human systems behind it.

SRI Director David Lie echoed this sentiment, emphasizing that institutions like SRI have a critical role to play in convening cross-sector dialogues and producing actionable insights on emerging technologies.


Want to learn more?


Related Posts

 
Next
Next

Can a market-based regulatory framework help govern AI? New report weighs in