Global group of experts advises on concrete steps towards a robust AI certification ecosystem


In a new report published by the Schwartz Reisman Institute for Technology and Society, the global group of experts that make up the Certification Working Group (CWG) explores the necessary elements of an ecosystem that can deliver effective certification to support artificial intelligence that is responsible, trustworthy, ethical, and fair. Read the report (PDF).


We regularly place trust in a wide variety of things. We trust that a local restaurant’s food is safe to eat, that the lightbulbs in our homes won’t cause a fire, and that the vehicles in which we travel are relatively secure and reliable.

Why do we trust these things? Trust in the products, processes, and technologies we encounter every day stems from a combination of factors including regulation, industry certifications, accreditation, and standards. While there aren’t laws governing every single facet of the chains that deliver products and services, we at least know that, say, the person who handles food at a restaurant has undergone food safety training and accreditation, ensuring their compliance with rigorous standards and protocols.

Whenever new technologies enter the world, they must earn trust. So how can we create trust in the extensive suite of artificial intelligence (AI) systems that are rapidly permeating every facet of our lives right now? 

A new report from the Certification Working Group (CWG) assembled by the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto, the Responsible AI Institute, and the World Economic Forum’s Centre for the Fourth Industrial Revolution explores the necessary elements of an ecosystem that can deliver effective certification to support AI that is responsible, trustworthy, ethical, and fair.

➦ Read the report: ARTIFICIAL INTELLIGENCE CERTIFICATION: UNLOCKING THE POWER OF AI THROUGH INNOVATION AND TRUST (PDF)

The CWG report, “Artificial intelligence certification: Unlocking the power of AI through innovation and trust” (2024).

What is certification?

Certification is a process through which an independent body attests that an organization or its personnel, systems, or products meet specified objective standards or requirements. Certification can foster innovation, create guardrails, and keep humans at the centre of technological advancements.


How can we build an AI certification ecosystem?

Regulation has a key role to play in ensuring that AI systems are trustworthy and don’t cause harm, but a truly effective ecosystem is one that draws on the full range of trust and transparency tools available. Yet certification mechanisms for AI, which make use of this range of tools, have received limited attention in policy and academic circles.

Through this report, the CWG aims to address this gap, combining academic, government, NGO, and corporate expertise in emerging technologies, law and policy, governance, evaluation, engineering, audits, standards, and certification to arrive at four key recommendations for certification.

Recommendation 1: Government must lead on establishing objectives, resources, and funding

While standards and certification are useful for supporting societal objectives, the decisions about what those objectives ought to be shouldn’t be made in standards-setting rooms.

Instead, governments have a responsibility to clearly state what societal objectives should be pursued, and to support these objectives by providing resources. Otherwise, developers and implementers may choose whatever frameworks best suit them. A critical part of government’s role in objective-setting is investing in internal capabilities: hiring and supporting workers who understand AI and the associated landscape.

Recommendation 2: Government and other organizations with large procurement budgets should support market development

Government could generate market demand for certification through regulatory measures and procurement practices. It could also mandate certification for AI systems in high-risk scenarios and signal a preference for certified systems in procurement processes.

Key government agencies could have a big impact on creating momentum and influencing the market by signaling demand (even informally). Large companies also play a significant role in shaping demand for AI certification; the CWG report notes the emerging role of private-sector actors, such as AI insurance providers, in contributing to market development by offering AI certification as part of their services.

Recommendation 3: All stakeholders must invest time and resources to get the foundations in place

Establishing the foundation necessary for effective AI certification will take significant effort. There’s the ongoing task of developing clear standards against which AI systems can be certified. There’s also the need for a comprehensive set of internationally recognized documents to support certification. All of this will take time and resources.

While progress has been made in creating some standards, the CWG report highlights the need for further work to develop tools like standardized impact assessments and enable certifications beyond an organization's AI management system. 

On this point, the CWG advocates for joint certifications that integrate management system and product certification attributes, especially in the context of evolving technologies like generative AI. The report stresses the importance of collaboration among certification experts to establish rules that make sense, and calls for clarity on the intended use of frameworks for conformity assessment to avoid misuse or manipulation of these frameworks.

Recommendation 4: Stakeholders should move quickly to advance the state of the art from these foundations

How can various stakeholders act now to support the development of a certification ecosystem built on the existing foundation?

Recommendations on this topic include: building transparency and data availability into the next generation of standards; developing a reference architecture to serve AI policy and certification; and developing a research agenda to advance AI verification and validation tools for certification. On this point, the CWG offers a list of ways in which the unique nature of AI and advanced technologies may complicate their compatibility with existing certification and conformity assessment processes.

Towards an AI certification ecosystem: where do we go from here?

Our relationship with AI is in its infancy. But developing trust in AI and other technologies is not for regulators and legislators alone. 

Building trust in AI will demand legal frameworks crafted by governments and shaped by open discussion among all who have a stake in the outcome: citizens, business leaders, advocates, and academics, to name a few. Yet it will also require a broad network of additional, complementary tools that go beyond traditional law.

With this report, the CWG aims to provide a succinct and useful guide to the current state of the certification ecosystem, its gaps, and where it can be improved. The CWG and the Schwartz Reisman Institute for Technology and Society, publisher of the CWG report, extend their support for this work and welcome commentary and feedback at sri.policy@utoronto.ca.

➦ Read the report: ARTIFICIAL INTELLIGENCE CERTIFICATION: UNLOCKING THE POWER OF AI THROUGH INNOVATION AND TRUST (PDF)
