SRI and the Rockefeller Foundation partner on building solutions for AI governance


Innovating AI Governance: Bold Action and Novel Approaches is an ongoing series of workshops developed by the Schwartz Reisman Institute for Technology and Society in collaboration with the Rockefeller Foundation.


AI and related technologies are increasingly being implemented in all aspects of society, from healthcare to marketing, retail, robotics, financial and weather forecasting, national security, transportation, and far beyond.

As AI plays an ever-larger role in our lives, appropriate governance is urgently needed to ensure that its benefits to humanity outweigh its risks of harm. Moreover, good AI governance can also help our legal and regulatory systems evolve so that they do not pointlessly impede AI innovation.

To tackle these grand challenges, the Schwartz Reisman Institute for Technology and Society (SRI) is collaborating with the Rockefeller Foundation on an ongoing initiative entitled “Innovating AI Governance.” The initiative launched in December 2020 with its first event: a virtual symposium organized in partnership with the Center for Advanced Studies in the Behavioral Sciences (CASBS) at Stanford University.

This event bridged diverse networks to foster a common understanding of the landscape, raised questions and tensions for consideration, and identified where and how issues can be collectively addressed.

“We already know that the management of AI is not just a technical challenge, but a societal one as well,” says Jamison Steeve, Senior Advisor - Policy, Strategy and Solutions at SRI. “That’s why we’re convening experts and stakeholders from a wide variety of sectors and industries to troubleshoot these complex problems together.”

While voluntary AI ethics principles have been extensively discussed and drafted across sectors and industries worldwide—by the OECD, UNESCO, the EU, Google, and Microsoft, to name a few, as well as by technologists and academics—they have tended to focus on high-level principles and ethical guidelines rather than tangible governance solutions.

Very few such solutions have actually been implemented.

“We’ve seen enormous demand in government, industry, and civil society for these kinds of tangible regulatory solutions,” says Steeve, “and that’s exactly what the solutions stream at the Schwartz Reisman Institute aims to meet. We want to convene partners in a cooperative atmosphere to build new models of governance—ones which are badly needed, and which reimagine the relationship between government, citizens, and the private sector.”

As the development and deployment of AI rapidly expand, appropriate ethical norms, governance structures, and institutional arrangements are key to ensuring that AI works for the benefit of humanity, that these benefits are distributed fairly, and that AI technologies don’t undermine human autonomy and self-determination.

The “Innovating AI Governance” initiative’s next step is to adapt Schwartz Reisman’s solutions workshop design to a virtual format in order to develop practical solutions for real-world AI governance issues.

“Our ambition is to catalyze the research, experimentation, and investment needed to build governance models for responsible AI that can be adopted globally across jurisdictions, economies, and sociopolitical contexts,” says Steeve.

“Our process is meant to move ideas from the hypothesis stage into action, and we’re really excited to re-convene with our partners for another productive session in the spring of 2021.”



With images courtesy of the Rockefeller Foundation.


UPDATE (August 25, 2021): A new paper from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) explores the opportunities and risks of large-scale “foundation models” used in the development of AI, which currently sit beyond the reach of oversight and regulation. One of the prototypes developed through the SRI–Rockefeller workshop described above is a pilot project to develop an open-source audit layer that will allow regulators to transparently assess AI foundation models. A report on this prototype and other outcomes of the workshop is forthcoming from the Schwartz Reisman Institute.

