Risk and uncertainty: What should we do about AI?

 

Is AI development moving too fast? In a panel on TVO’s The Agenda with Steve Paikin, SRI Director and Chair Gillian Hadfield was joined by Jeremie Harris and Pedro Domingos to discuss what risks AI poses to humanity, and what we should do about it. (Image credit: TVO.)


“Do we need to pump the brakes on AI?”

This question from Steve Paikin, host of TVO’s The Agenda, opened the episode airing May 11, 2023. 

The Schwartz Reisman Institute’s Director and Chair Gillian Hadfield joined Jeremie Harris, co-founder of Gladstone AI, and Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington, to discuss whether AI development is moving too fast, what risks this development may pose to humanity, and what we should do about it.

“We are at a real inflection point,” said Hadfield. “We’ve seen tremendous advances with AI; what we saw in the fall was exciting and new. But maybe there’s a lot more happening here than we understood.”

In March 2023, an open letter published by the Future of Life Institute was signed by hundreds of the biggest names in tech, including Elon Musk and Steve Wozniak, urging the world's leading artificial intelligence labs to pause the development of powerful new AI systems for six months, saying that recent advances in AI presented “profound risks to society and humanity.”

Hadfield and her fellow panelists discussed those risks on The Agenda: from AI falling into the hands of malicious actors to the displacement of significant parts of the workforce, the risks are varied and not all of them are yet known. Hadfield described her work researching and developing effective regulation for AI in order to mitigate the harms it might cause.

“We regulate the rest of our economy to make sure it’s good and safe and working the way we want it to,” said Hadfield.

“Think about medical research: it takes place in a regulated structure. We have ways of testing, we have clinical trials, we have ways of deciding what pharmaceuticals, medical devices, and treatments to put out there. With AI, we have seen such a leap in capability that it has basically outstripped our existing regulatory environment. We haven’t built a regulatory environment that makes AI the kind of thing that’s safe enough.”

Recent warnings of risk by world-renowned computer scientist Geoffrey Hinton, a U of T professor emeritus whose work in developing deep learning neural networks paved the way for today’s most powerful AI systems, have made waves around the world.


University of Toronto professor emeritus Geoffrey Hinton, who is a member of the Schwartz Reisman Institute’s Advisory Board, recently left his position at Google to speak out about AI safety risks. (Supplied image.)

Asked for her take on Hinton’s words, Hadfield said she thinks Hinton “has truly been surprised by the advances that we’ve seen in the last six months.”

“And if you listen to what Geoff has to say, it’s not that he knows there’s an existential risk, it’s that there is so much uncertainty about the way these systems behave that we should be studying this problem and not getting out ahead of it. I think that’s an important statement.”

Hadfield noted that she was one of the signatories of the open letter signed by Musk and Wozniak.

“I signed the letter so that we’d be having this conversation,” said Hadfield, who also holds a position as senior policy advisor at OpenAI, the organization that released ChatGPT.

“Currently, these powerful AI systems are being built almost exclusively inside private technology labs. Yes, those labs have a lot of concern for safety, but they’re making those decisions internally. Those are things we should be deciding publicly, democratically, with expertise outside of engineering.”

Returning to regulation, Hadfield highlighted the ways in which our current regulatory environment could improve to tackle the breakneck speed at which AI systems are developing.

“It’s really important to recognize, when we talk about regulation, that there are some building blocks that we don’t have in place,” she said.

“AI moves so fast and is so pervasive that I don’t think we can approach it as we have with our previous technologies, by saying ‘let’s put it out there and find out how it works, and then we’ll figure out how to regulate it.’ We have to be a little bit more proactive than that.”

Asked to offer a specific suggestion, Hadfield raised the concept of a national registry. If a corporation is developing or deploying a large AI model, the government should “have eyes on it,” said Hadfield.

“Here’s an analogy: every corporation in the country is registered with a government agency. They have an address; the government knows they exist. We register our cars, so we know where the cars are and who’s got the cars. Right now, we don’t have that kind of visibility into AI systems. As a starting point, we need companies to disclose basic pieces of information about the [AI] models they have.”

Hadfield’s co-panelist Harris agreed that this moment is unprecedented and fundamentally different from other technological advances.

“Nothing is guaranteed,” said Harris. “That’s part of the unique challenge of this moment. We’ve never made AI systems or intelligent systems smarter than us. We’ve never lived in a world where those systems exist. We have to deal with that uncertainty in the best way we can. Part of that is consulting with folks who actually understand these systems and specifically are experts in technical safety.”

 
 
