Gillian K. Hadfield: Regulatory technologies can solve the problem of AI

In this week’s Saturday Debate in The Toronto Star, Schwartz Reisman Institute Director Gillian K. Hadfield argues that we can meet the challenge of AI with regulatory technologies. Image and text republished courtesy of The Toronto Star.


Fear is the human condition. But so too is designing rules and systems that manage what frightens us.

Our ancestors were rightly afraid of predators on the savannah. That’s why they developed the rules needed to live in stable groups that could co-operate to defend their members.

Today, it’s reasonable for us to be afraid that the food we eat might make us sick. Or that people might steal our hard-earned dollars. Or that those in power might exploit us for personal gain. But it’s also reasonable to overcome those fears with the confidence that food safety rules, criminal laws, and citizen protections against government abuses are working reasonably well.

AI is no different. It’s reasonable to worry about the impact it might have on our societies. But it’s also reasonable to expect that we’ll meet this challenge by building regulatory systems that ensure AI is delivering on its massive potential for raising the quality of human life without destroying societies in the bargain.

AI could vastly reduce human error, improve analysis and forecasting, and make service provision more efficient and tailored for the diversity of populations and communities around the globe. With AI, we could make huge strides in improving health care, battling climate change, ensuring safer cities, responding to pandemics like COVID-19, making life more affordable and accessible for marginalized populations, and much, much more.

There’s no question that AI can cause real harms, such as perpetuating bias in employment, insurance, and criminal justice and fomenting political polarization and misinformation. AI will also automate some, possibly many, jobs and make them obsolete. If this happens faster than we can adapt with new jobs and new ways of sharing economic wealth, we risk economic dislocation and deepening inequalities.

But I think these are problems we can solve. In one sense, AI brings us new problems—the scale, global nature, and speed of AI development outstrip anything we have confronted before. But in another sense, the problem is age-old: how do we adapt our systems of rules to keep up with social and economic change?

We faced that challenge when our societies advanced beyond small hunter-gatherer bands that could regulate with conversation around the fire—Indigenous peoples around the globe invented democratic councils for this purpose. When ancient Mesopotamians invented the wheel, they also invented written laws, like Hammurabi’s code roughly 4,000 years ago, and more elaborate political structures for complex civilizations. When the Industrial Revolution transformed agricultural economies, we invented large-scale democracies and the regulatory state.

And that’s where we are today: facing the need for radical reinvention of how we regulate in order to keep up with powerful technologies like AI.

The fact is that our 20th-century ways of regulating—with complex legalese, bureaucratic procedures, and expensive litigation—are no longer fit for purpose. They are too slow, too expensive, and too poorly informed. Plus, they operate at national scale when AI is inherently global.

But there is another way. I call it regulatory technology: technologies that regulate other technologies. Corporations already use regulatory technology to protect their interests—blocking unauthorized software downloads automatically, for example. We need to get more of these technologies built to protect our interests.

I’m already seeing technologies emerge to keep AI in check. A team from U of T Engineering, for example, is building AI that automatically removes personal information from documents so companies can share data without invading privacy.
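To give a concrete picture, here is a minimal sketch of automated de-identification in Python. It is an illustration of the general idea, not the U of T team’s actual system: it redacts only two easy categories, email addresses and phone numbers, with regular expressions, whereas production tools rely on trained models to catch names, addresses, and subtler identifiers.

```python
import re

# Hypothetical patterns for illustration only; real de-identification
# tools use trained NER models and much broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a bracketed category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 416-555-0199."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```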

A number of startups are building AI that detects bias in automated decision systems. Researchers are using AI to identify which jobs are at risk, so we can build smart systems to retrain workers and ensure they receive the benefits and protections they deserve.
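To make “detecting bias” concrete, here is a minimal sketch of one widely used audit measure, the demographic parity gap: the difference in favourable-outcome rates across groups. The function and the toy data are hypothetical illustrations, not any particular startup’s method.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest approval rate across groups.

    decisions: list of 0/1 outcomes from the automated system
    groups:    list of group labels, aligned with decisions
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        approved[g] += d
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy hiring data: a gap near zero suggests similar approval rates
# across groups; real audits combine several such metrics.
gap, rates = demographic_parity_gap(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, gap)  # {'A': 0.75, 'B': 0.25} 0.5
```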

I’m working with teams around the globe to build new legal infrastructure for privacy-protective data sharing, certification methods that automate and streamline regulatory compliance, and technologies that audit the complex systems inside AI companies.

I’m working to bring these technologies under democratic oversight while taking advantage of the innovative power of markets.

The secret to a future where we don’t fear AI is being bold about new ideas for regulating AI. I’m looking forward to the day when young entrepreneurs are boasting about building the latest technology to make sure that AI is safe and democratic. That day is on the horizon.

This text was originally published in the Toronto Star on April 17, 2021.


About the author

Gillian K. Hadfield is director of the Schwartz Reisman Institute for Technology and Society and professor of law and economics at the University of Toronto. Her book, Rules for a Flat World, is now available on Audible.

