Trustworthy AI: Lessons from recent experience

Schwartz Reisman Engineering Lead Ron Bodkin says the European Commission’s recent white paper on AI falls short in assessing risk and proposing adequate regulation.

In February 2020, the European Commission published a white paper, “On Artificial Intelligence - A European approach to excellence and trust,” with the goal of setting out policy options to promote the uptake of AI while also addressing the risks associated with some of its uses.

While the effort is “laudable,” Schwartz Reisman Engineering Lead Ron Bodkin notes that the paper does not present policy options that address real, concrete problems already observed in some AI systems, nor does it propose accompanying regulation that can succeed in changing behaviour. Moreover, Bodkin says the paper’s definition of “high risk areas” of AI is much too narrow.

In a short feedback piece, Bodkin outlines the ways in which the white paper falls short and proposes amendments that could address the breadth of potential risks and harms in a world rapidly implementing AI across all aspects of life.

What’s the scope of the risk we face with AI?

The EU’s white paper emphasizes particular areas in which the risk of harms caused by AI should be monitored: fundamental rights (such as privacy and non-discrimination) as well as safety and liability-related issues.

But Bodkin highlights the need to look at “evidence of current harm” to guide our conception of the scope of harms, rather than simply hypothesizing categories of harms and then “comparing AI systems with perfection.”

He also points out that the criteria used to identify where high-risk AI applications may appear are too narrow: the white paper draws boundaries around sectors in which significant risks can be expected to occur and particular ways of using AI that are likely to create significant risks.

This means we may be looking for harms only where and when we expect to see them.

Instead, Bodkin describes a number of real-world cases “that should be addressed urgently” because they “violate fundamental rights but not in the areas emphasized in the [EU’s] definition.” 

For example, there are many areas of media and commerce in which harms are not incidental misfires but regular by-products of AI systems working as designed:

  • In digital advertising, social media, and gaming, algorithms manipulate and deceive users to create addictive behaviour. Studies have shown this results in increased feelings of isolation and depression, especially among teenagers.

  • In politics, well-known cases of algorithmic election manipulation have been observed, such as the highly publicized targeted voter suppression wrought by Cambridge Analytica.

  • In the consumer sphere, AI-manipulated product search results and targeted recommendations maximize profit while often misleading consumers or harnessing detailed aspects of their personal lives to push purchasing behaviour. 

  • Overtly false information and extremist content that promotes hate, oppression, and violence are most often spread by AI tools that seek to maximize engagement at the cost of accuracy.

All of these real and measurable harms are, in Bodkin’s view, a better barometer of where we should focus our energies than hypothetical cases of how AI might go wrong or cause harm as an incidental side-effect of its functioning.

What are appropriate rules and regulations for AI? How should we enforce them?

In Bodkin’s view, the EU white paper “implies that voluntary guidelines for self-regulation will suffice”—a stance he sees as unsatisfactory. 

Rather than relying on industries to regulate themselves sector by sector, with varying degrees of strictness and success, Bodkin advocates for, among other things, state-mandated data protection infrastructure and private-sector innovations, such as AI auditing processes, that would help public regulators respond to rapid developments in the tools they regulate.

On this last point, Bodkin also points to Schwartz Reisman Director Gillian K. Hadfield’s notion of ‘regulatory markets,’ developed in collaboration with Jack Clark of OpenAI. “Having more scope for private innovation to advance rules would be of great value,” says Bodkin, “given the speed of change in this sector and challenges in recruiting top talent for regulatory purposes.”

No doubt, AI is evolving quickly enough to outpace its regulation, which is why Bodkin favours equipping auditors who “meet standards for expertise” and can “certify best practices based on state of the art capabilities,” rather than simply “requiring frequent updates to dynamic regulations.”

In other words, expert-informed rules that apply across the board to AI’s current capabilities and likely future directions are a better and more efficient approach than playing ‘catch-up’ as AI advances faster than we can possibly monitor it.

AI regulation would no doubt require a complex system of not only expert auditors but also changes to our legal, political, and economic systems. Above all, we must treat AI safety and responsible AI as a whole-of-society issue, rather than a problem created and solved by technologists alone.

As part of this holistic view, Bodkin suggests training multiple stakeholders in AI’s capabilities and its regulation, including business leaders, product managers, user experience professionals, social scientists, and risk and compliance professionals.

All of these complex problems, taken together, lead Bodkin to conclude that while AI moves fast, we need to find ways to keep up; that while AI holds the promise of great benefit to society, we must balance risk against the cost of regulation; and that while our media systems and democracies evolve beyond recognition, we need to stay attentive and strive to understand them, now more than ever.

Ron Bodkin is the VP of AI Engineering and CIO at Vector Institute and is the Engineering Lead at the Schwartz Reisman Institute for Technology and Society. Previously, Bodkin was responsible for Applied Artificial Intelligence in Google Cloud’s office of the CTO, was the founding CEO of Think Big Analytics, an enterprise data science and engineering service, and created an artificial intelligence incubator at Teradata. Bodkin holds an honours BSc in math and computer science from McGill University and a master’s degree in computer science from MIT.

