Risk and uncertainty: What should we do about AI?
“Do we need to pump the brakes on AI?”
This question from Steve Paikin, host of TVO’s The Agenda, opened the episode airing May 11, 2023.
The Schwartz Reisman Institute’s Director and Chair Gillian Hadfield joined Jeremie Harris, co-founder of Gladstone AI, and Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington, to discuss whether AI development is moving too fast, what risks this development may pose to humanity, and what we should do about it.
“We are at a real inflection point,” said Hadfield. “We’ve seen tremendous advances with AI; what we saw in the fall was exciting and new. But maybe there’s a lot more happening here than we understood.”
In March 2023, the Future of Life Institute published an open letter, signed by hundreds of the biggest names in tech, including Elon Musk and Steve Wozniak, urging the world’s leading artificial intelligence labs to pause the development of powerful new AI systems for six months and warning that recent advances in AI presented “profound risks to society and humanity.”
Hadfield and panelists discussed those risks on The Agenda: from AI getting into the hands of malicious actors to the displacement of significant parts of the workforce, the risks are varied and not all of them are yet known. Hadfield described her work on researching and developing effective regulation for AI in order to mitigate the harms it might cause.
“We regulate the rest of our economy to make sure it’s good and safe and working the way we want it to,” said Hadfield.
“Think about medical research: it takes place in a regulated structure. We have ways of testing, we have clinical trials, we have ways of deciding what pharmaceuticals, medical devices, and treatments to put out there. With AI, we have seen such a leap in capability that it has basically outstripped our existing regulatory environment. We haven’t built a regulatory environment that makes AI the kind of thing that’s safe enough.”
Recent warnings of risk by world-renowned computer scientist Geoffrey Hinton, a U of T professor emeritus whose work in developing deep learning neural networks paved the way for today’s most powerful AI systems, have made waves around the world.
Asked for her take on Hinton’s words, Hadfield said she thinks Hinton “has truly been surprised by the advances that we’ve seen in the last six months.”
“And if you listen to what Geoff has to say, it’s not that he knows there’s an existential risk, it’s that there is so much uncertainty about the way these systems behave that we should be studying this problem and not getting out ahead of it. I think that’s an important statement.”
Hadfield noted that she was among the signatories of the open letter, alongside Musk and Wozniak.
“I signed the letter so that we’d be having this conversation,” said Hadfield, who also holds a position as senior policy advisor at OpenAI, the organization that released ChatGPT.
“Currently, these powerful AI systems are being built almost exclusively inside private technology labs. Yes, those labs have a lot of concern for safety, but they’re making those decisions internally. Those are things we should be deciding publicly, democratically, with expertise outside of engineering.”
Returning to regulation, Hadfield highlighted the ways in which our current regulatory environment could improve to tackle the breakneck speed at which AI systems are developing.
“It’s really important to recognize, when we talk about regulation, that there are some building blocks that we don’t have in place,” she said.
“AI moves so fast and is so pervasive that I don’t think we can approach it as we have with our previous technologies, by saying ‘let’s put it out there and find out how it works, and then we’ll figure out how to regulate it.’ We have to be a little bit more proactive than that.”
Asked to offer a specific suggestion, Hadfield raised the concept of a national registry. If a corporation is developing or deploying a large AI model, the government should “have eyes on it,” said Hadfield.
“Here’s an analogy: every corporation in the country is registered with a government agency. They have an address; the government knows they exist. We register our cars, so we know where the cars are and who’s got the cars. Right now, we don’t have that kind of visibility into AI systems. As a starting point, we need companies to disclose basic pieces of information about the [AI] models they have.”
Hadfield’s co-panelist Harris agreed that this moment is unprecedented and fundamentally different from previous technological advances.
“Nothing is guaranteed,” said Harris. “That’s part of the unique challenge of this moment. We’ve never made AI systems or intelligent systems smarter than us. We’ve never lived in a world where those systems exist. We have to deal with that uncertainty in the best way we can. Part of that is consulting with folks who actually understand these systems and specifically are experts in technical safety.”
Want to learn more?
Watch Gillian Hadfield on The Agenda with Steve Paikin discussing “Is AI an existential threat?”
Read a summary of Hadfield’s discussions on why ChatGPT is a “game changer” for AI.
Read an interview with SRI Research Lead Avi Goldfarb on the disruptive economics of AI.
Learn more about the latest developments in AI research at Absolutely Interdisciplinary 2023.
Read an SRI white paper on how global AI standards can be used for developing AI responsibly.