New AI Audit Challenge seeks designs for detecting bias in AI systems

 

Evaluating AI risks is a crucial and complex task, yet policymakers lack tools to effectively investigate potential harms and biases within AI systems. In response, Stanford’s Institute for Human-Centered AI and Cyber Policy Center have launched an AI Audit Challenge that calls for solutions, tools, datasets, or models that enable effective audits of AI systems. The challenge grew out of an SRI Solutions Workshop on AI governance held in partnership with the Rockefeller Foundation in 2021.


How can we better audit artificial intelligence (AI) systems to remove bias and discrimination? As AI technologies are increasingly deployed across all aspects of society, this question is essential to ensuring that we benefit from powerful new technologies while preserving justice for all.

To date, the development of tools that can assess the fairness of an AI system has lagged, even as other technologies that draw on the immense potential of deep learning for applications like computer vision and natural language processing have become increasingly powerful. For these AI systems to be used successfully and effectively across society, however, policymakers and technology developers must work together to identify their limitations and risks.

In response to this regulatory challenge, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) and Cyber Policy Center recently launched the AI Audit Challenge, a new competition that calls for solutions, tools, datasets, or models that enable regulators, journalists, and civil society to effectively audit deployed AI systems and open-source models.

The AI Audit Challenge is open to any person or group, who may submit proposals for applied tools that can assess whether deployed AI systems exhibit bias or carry potential for discrimination. Winning submissions will either demonstrate how technical tools can make it easier for humans to audit deployed AI systems or open-source models, or provide tools that detect bias and discrimination based on race, gender, age, disability, and other protected traits.
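By way of illustration only (this sketch is ours, not part of the challenge), one of the simplest checks such an audit tool might perform is a demographic parity test: comparing a model’s rate of positive decisions across protected groups. A minimal Python sketch, using invented data and a hypothetical helper function:

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Return the spread in positive-prediction rates across groups
    and the per-group rates (a gap of 0.0 means equal rates)."""
    rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Invented data: binary decisions from some deployed model, plus a
# protected attribute for each individual (both purely illustrative).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap, rates = demographic_parity_gap(preds, group)
print(f"Positive rates by group: {rates}")   # {'a': 0.75, 'b': 0.25}
print(f"Demographic parity gap: {gap:.2f}")  # 0.50
```

Real audit tooling would of course go well beyond a single metric, probing intersectional groups, calibration, and error-rate disparities across the traits listed above.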

The multi-prize competition, which offers a total of $71,000 in awards, is chaired by SRI Advisory Board Member Jack Clark, Stanford HAI Faculty Associate Director Rob Reich, and SRI Advisory Board Member Marietje Schaake, and features a competition jury and advisory board composed of academics, policymakers, programmers, and technologists, including the Schwartz Reisman Institute’s Director and Chair Gillian Hadfield. A full list of participants is available on Stanford HAI’s website.

 
Jack Clark, Rob Reich, and Marietje Schaake co-chair Stanford’s new AI Audit Challenge.

 

Inspiration at an SRI Solutions Workshop

The AI Audit Challenge grew out of a series of workshops held by the Schwartz Reisman Institute for Technology and Society (SRI) in partnership with the Rockefeller Foundation in the spring of 2021. Titled “Innovating AI Governance,” the workshops brought together diverse networks to explore how AI governance can be applied through tangible practices, rather than through high-level principles and voluntary ethics guidelines, which are often difficult to implement in practice.

Using SRI’s unique Solutions methodology, workshop participants were guided through a series of design thinking exercises structured to generate a concrete solution or prototype in response to a question concerning AI governance. In the case of the AI Audit Challenge, the workshop group sought to develop a method for spurring innovation in the techniques and technologies available to audit AI systems, and arrived at the idea of holding a contest as an optimal strategy for engaging a diverse pool of potential designers. Other projects launched from the workshop include a certification scheme for assessing whether an AI system is responsible, and a data stewardship model for sharing information between providers via a common platform in a way that fosters trust and respect for privacy.

The organizers of the AI Audit Challenge describe the initiative as “keen to catalyze and build on the larger body of work that already exists to interrogate and analyze these AI systems,” noting they are “less motivated by publishing in academic journals and instead have chosen to prioritize impact through applied investigations, tools, and demonstrations.”

“It’s exciting to see this initiative move from a concept surfaced in an SRI workshop to a reality,” said Hadfield. “As part of the Audit Challenge Advisory Board, I look forward to seeing what will hopefully be a wide range of submissions that tackle the question of how to appropriately audit AI systems in innovative new ways.”

Contest timeline and details

Submissions to the AI Audit Challenge can be made until October 10, 2022, after which they will be evaluated by the jury. Participants will have the opportunity to iterate on their work through workshops, and to receive advice and support from the advisory board. Two first-place winners will each receive $25,000, with additional awards for second and third place.

Ideas and proposals are welcome from all sources, sectors, and types of organizations, including for-profit, not-for-profit, and private companies, and may come from partnerships spanning several organizations and countries.
