New report from AI100 Study Panel examines biggest promises and most significant challenges of AI

The recently published AI100 report features contributions from SRI Director Gillian Hadfield, who serves on the AI100 Study Panel, and SRI Associate Director Sheila McIlraith, who is a member of the initiative’s Standing Committee.

Launched in 2014, the 100-Year Study on Artificial Intelligence (AI100) convenes leading thinkers from across a wide spectrum of fields to examine how the effects of AI will ripple through every aspect of how people work, live, and play.

AI100 released its first report in 2016, and its second was published on September 16, 2021. Conceived as a longitudinal study, AI100 plans to publish reports every five years for at least a century.

Read: Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.

Each report is written by an AI100 Study Panel of core multi-disciplinary researchers who examine the current state of the AI field, technical and societal challenges, and what lies ahead. The latest report’s Study Panel includes Schwartz Reisman Institute Director Gillian Hadfield, a scholar of law and economics specializing in AI regulation, modeling human normativity, and working with technologists to build AI that is aligned with human values.

SRI Associate Director Sheila McIlraith, a computer scientist specializing in AI decision-making and human-compatible AI, is also involved in AI100 as a member of the initiative’s Standing Committee, which oversees the study and its reports.

“It’s very exciting to see the latest AI100 report finally out in the world,” says Hadfield. “The first report in 2016 was very focused on technical progress and treated risks and challenges only briefly. This year, a key shift is much greater concern about the potential risks and challenges of AI, which aligns well with our focus at SRI: how do we make sure AI is good for the world?”

“We’re also starting to see a greater integration of people from other disciplines outside of computer science involved in the AI ecosystem,” says Hadfield, “which is very promising. This aligns well with the deep interdisciplinarity we’re fostering at SRI between scholars of all stripes interested in the broad implications of AI for human life.”

McIlraith echoes Hadfield’s focus on interdisciplinary collaboration as a key driver of AI research.

“AI has gone beyond just being in the lab, and, as such, has the potential to affect many aspects of our lives,” says McIlraith. “This broadens the scope of study, causing experts from many diverse fields to scrutinize its potential and impact. I've found the interdisciplinarity in both AI100 and the Schwartz Reisman Institute to be such an enriching experience. I’ve reflected on my discipline in a different way, my eyes have opened to exploring new questions, and I’ve looked at questions I’ve been exploring through a different lens.”

McIlraith also points to another key element of the AI100 study: that it’s longitudinal.

“As [AI100 founder] Eric Horvitz observed: one report provides a datapoint, but two reports form a line,” says McIlraith. “This suggests a direction of change that can be monitored over time. The study report reflects not only on the state of AI now, but it also looks back at the previous report from 2016, as well as establishing questions that we'll be able to use and modify over time to track how AI is impacting the world.”

Whereas the first AI100 report focused on the impact of AI in North American cities, the participants aimed for this new study to explore the impact that AI is having on people and societies worldwide.

With those goals in mind, the new report profiles the findings of two workshops commissioned by AI100’s Standing Committee: one on “Prediction in Practice,” which studied the use of AI-driven predictions of human behaviour, and another on “Coding Caring,” which addressed the challenges and opportunities of incorporating AI technologies into how humans care for one another, including the roles that gender and labour play in meeting the pressing need for innovation in healthcare.

Along with the findings of the workshops, the report is structured around 12 standing questions (SQs). Hadfield, in particular, took the lead on SQ3, which she notes “reflects a shift in what we are facing, moving from simply striving to meet AI performance benchmarks to outlining fundamental goals around AI’s cooperation with humans and normativity.”

AI100 Standing Questions

  • SQ1. What are some examples of pictures that reflect important progress in AI and its influences?

  • SQ2. What are the most important advances in AI?

  • SQ3. What are the most inspiring open grand challenge problems?

  • SQ4. How much have we progressed in understanding the key mysteries of human intelligence?

  • SQ5. What are the prospects for more general artificial intelligence?

  • SQ6. How has public sentiment towards AI evolved, and how should we inform/educate the public?

  • SQ7. How should governments act to ensure AI is developed and used responsibly?

  • SQ8. What should the roles of academia and industry be, respectively, in the development and deployment of AI technologies and the study of the impacts of AI?

  • SQ9. What are the most promising opportunities for AI?

  • SQ10. What are the most pressing dangers of AI?

  • SQ11. How has AI impacted socioeconomic relationships?

  • SQ12. Does it appear “building in how we think” works as an engineering strategy in the long run?

Like the first report, the second aims to address four audiences simultaneously—the general public, industry, government, and researchers—striving for broad accessibility across expertise levels and areas of specialization.

Along with Hadfield, the AI100 Study Panel is composed of: Michael L. Littman (chair), Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh.
