Our weekly SRI Seminar Series welcomes Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, University of Oxford, where she leads the Governance of Emerging Technologies (GET) Research Programme. Wachter’s work focuses on the legal and ethical implications of AI, Big Data, and robotics, addressing issues such as algorithmic bias, explainable AI, predictive policing, and health tech. A trusted advisor to governments, companies, and NGOs, Wachter is affiliated with numerous global institutions and contributes to shaping policy and regulation for emerging technologies.
In this talk, Wachter will discuss the long-term societal risks posed by large language models (LLMs). She introduces the concept of “careless speech,” a new type of harm created by LLMs that threatens to degrade knowledge and trust in democratic societies over time. Drawing on examples of AI “hallucinations,” Wachter will explore whether LLMs have a legal duty to tell the truth, analyzing obligations under EU human rights law, the Artificial Intelligence Act, and other regulatory frameworks.
Moderator: Anna Su
Talk title:
Do large language models have a legal duty to tell the truth?
Abstract:
Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace; they are not, strictly speaking, designed to tell the truth. Our tendency to anthropomorphise machines and trust models as human-like truth tellers, consuming and spreading the bad information they produce in the process, is uniquely worrying.
Yet they are deployed in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance. I coin the idea of “careless speech” as a new type of harm created by LLMs, one that poses cumulative, long-term risks to science, education, and shared social truth in democratic societies. These subtle mistruths are poised to gradually degrade and homogenize knowledge over time.
This raises the question: Do large language models have a legal duty to tell the truth?
I will demonstrate the prevalence of hallucinations and assess the existence of truth-related obligations in EU human rights law, the Artificial Intelligence Act, the Digital Services Act, the Product Liability Directive, and the Artificial Intelligence Liability Directive. I will close by proposing ways to reduce hallucinations in LLMs.
Suggested reading:
S. Wachter, B. Mittelstadt, and C. Russell, “Do large language models have a legal duty to tell the truth?” Royal Society Open Science, vol. 11, no. 8, August 2024.
About Sandra Wachter
Sandra Wachter is professor of technology and regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.
Professor Wachter is also an affiliate or member of numerous institutions, such as the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, UNESCO, the European Commission’s Expert Group on Autonomous Cars, the Law Committee of the IEEE, the World Bank’s Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Law Faculty at Oxford, the Bonavero Institute of Human Rights, the Oxford Martin School, and Oxford University Press. Wachter also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.
About the SRI Seminar Series
The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advances scholarship at the intersection of technology and society. Each seminar is led by a leading or emerging scholar and features extensive discussion.
Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.