Buyer beware of the generative AI bandwagon

 

The potential of generative AI technologies is exciting, but these systems come with new sets of risks, observes SRI Policy Lead Phil Dawson, and enterprises must develop new strategies for third-party risk management to navigate the emergent challenges associated with them. Image: DALL·E 2.


The public emergence of generative AI over the past few months has revealed how, for better and for worse, large language models (LLMs) like ChatGPT are poised to revolutionize significant portions of our economy. Text-to-image models like DALL·E, as well as chatbots like GPT-4, Claude, and others can provide individuals and businesses alike with a number of benefits, for example, by automating a broad range of content generation or accelerating research and discovery. Most sectors have already begun adopting these tools, from education, academia, and scientific research to customer service, marketing, and the creative arts. Even traditionally conservative industries, like legal practice, banking, and insurance, are exploring the potential to increase productivity or enhance their offerings through generative AI.

While the market has high hopes for generative AI—venture capital firms have increased investment in the sector by 425% over the last three years—today, these systems and the products they underpin are far from perfect. Image generators return unexpected, undesired, or altogether bizarre results. Chatbots can provide false information and increasingly predictable, formulaic responses. Moreover, people have quickly overcome the guardrails that their creators imposed to prevent misuse: users have produced violent and sexual imagery mimicking real-world people, and various workarounds to ChatGPT’s content restrictions have emerged. From the legality of their origins and their dubious outputs to their toxic content and overall “steerability”—if you’ve read any “unhinged Bing” articles you should understand this concept—important questions abound about the reliability and safety of LLMs in the near and long term.

Last week, OpenAI’s CEO commented that he and his colleagues are “a little bit scared” about all this. This follows public remarks from him and the company’s CTO that, on account of the vulnerabilities of these systems, generative AI should be regulated and, potentially, subject to pre-release independent audits—an idea that regulators and policy leaders have been weighing carefully for months.

Risky business: Large language models are riding a wave of innovation

OpenAI is not the only developer to highlight the risks of LLMs. As part of an article outlining core views on AI safety, a rival company, Anthropic, indicated that it plans to allow an independent, external organization to evaluate its models’ capabilities and safety implications in future releases. The article also includes a number of thoughtful observations about AI safety that should make us think more critically about AI risk. One of Anthropic’s hypotheses about AI’s future capabilities is particularly striking: “we do not yet have a solid understanding of how to ensure that these powerful systems are robustly aligned with human values so that we can be confident that there is a minimal risk of catastrophic failures.”

Essentially, even the creators of these models readily admit that they don’t understand them well enough to know how to control them and avoid “catastrophic failures.” This is indeed a little scary, considering that rapid developments in computing power and LLM innovation will only accelerate AI capabilities.

Measuring AI’s future risks is a hotly debated topic. For now, the bottom line is that, while these tools are powerful, they remain unpredictable and, in some cases, continue to exhibit significant weaknesses and risks that make them challenging for even the most sophisticated enterprises to adopt safely.

 

Advances in generative AI are exciting, but the pace of their development should make us question standards for releasing large language models, writes SRI Policy Lead Phil Dawson. Image generated by Midjourney.

 

Despite these risks, companies face increasing pressure to buy generative AI products given the potential efficiencies and competitive advantages they may bring. They also fear falling behind competitors that are choosing to plow ahead anyway. The flurry of recent releases and large-scale integrations by companies like Quora, Microsoft, DuckDuckGo, and others likely only heightens this sentiment. And if falling behind is a daunting prospect under normal circumstances, it’s even less palatable amid the worsening economic conditions that businesses may be facing. So, while some companies may ultimately decide that the risks associated with generative AI outweigh the potential rewards, it’s likely that others will hop on the bandwagon.

Given the potential for financial and reputational damages, liability for copyright and privacy violations, or non-compliance with emerging regulations, companies looking to procure generative AI systems should take their potential safety and ethical impacts very seriously. Fortunately, while it may be challenging, there are measures companies can take to help minimize risk. First, companies should implement appropriate internal AI governance and risk management processes. Adopting AI policies and practices that enable an organization to identify, analyze, and manage the context-specific risks associated with generative AI applications is critical to assuring safe, fair, and compliant adoption and use.

Guidance for AI governance: Audits, assessments, analysis

There is a lot to consider, and for those new to this space, it can seem overwhelming. Helpful guidance for undertaking the demanding work of implementing appropriate AI governance mechanisms can be found in industry standards such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, the International Organization for Standardization’s (ISO) guidance on risk management for AI (ISO/IEC 23894), or the Organisation for Economic Co-operation and Development’s (OECD) guidance on governing and managing AI risks throughout a system’s lifecycle. An AI maturity assessment can help companies identify governance gaps and the mitigation measures needed to enhance their ability to manage and minimize generative AI risk. AI certification schemes being developed by the Responsible AI Institute are another great avenue to explore.

Second, companies should undertake independent audits or risk assessments of any products prior to purchase. Audits of LLMs and generative AI are complex, and, for many companies, they will require outsourcing to independent service providers with the expertise and technology required to properly assess the quality and reliability of a given model. Alternatively, companies could begin asking their vendors to submit the results of independent audits they have undertaken. 

From a process standpoint, much of this is second nature to large enterprises. Most already undertake rigorous vetting of technology vendors and their products in other contexts as part of third-party risk management (TPRM)—for instance, to assess compliance with information security frameworks like SOC 2 or ISO 27001 through a third party like Vanta or OneTrust. And some enterprises have already begun adapting their TPRM practices to account for the unique risks posed by AI, requiring vendors to submit to independent model assessments and fill out Responsible AI questionnaires. For those who have yet to do so, the procurement of generative AI products is probably a good opportunity to start.

Understanding the scope of an independent generative AI audit is important if these measures are to be effective. Critically, audits or risk assessments should focus on both the underlying LLM and any additional models or applications built on top, because the unique strengths and limitations of the base model may have downstream impacts at the application level. This can be done through rigorous performance, fairness, and robustness testing and will likely require custom approaches.

Depending on the context, audits of the generative AI application could consider a range of factors relevant to evaluate the quality and reliability of model outputs: for example, the suitability of the projected tone or the accuracy and length of responses, or the potential for harmful content such as hate speech or toxic, racist, discriminatory, or misogynistic language. Ultimately, audits of generative AI applications should evaluate whether outputs are safe and accurate and adhere to appropriate conversational norms as determined by the parameters of the use case and stakeholders.
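To make the shape of such testing concrete, below is a minimal sketch, in Python, of what an automated output-evaluation harness could look like. The prompts, reference keywords, length ceiling, toxicity classifier, and the `generate` stub are all illustrative assumptions rather than parts of any particular audit methodology; a real assessment would rely on much larger, use-case-specific test suites and more rigorous accuracy, fairness, and robustness metrics.

```python
# Minimal sketch of an automated output-evaluation harness for a generative AI
# application. All names below (generate, TEST_CASES, the length ceiling, and
# the toxicity classifier) are illustrative assumptions, not part of any
# specific audit methodology described in this article.
from transformers import pipeline

TEST_CASES = [
    {"prompt": "Summarize our refund policy in two sentences.",
     "reference_keywords": ["30 days", "full refund"]},
    {"prompt": "Draft a polite reply declining a meeting request.",
     "reference_keywords": ["thank you", "unable to attend"]},
]

MAX_RESPONSE_WORDS = 120  # length ceiling appropriate to the use case

# Off-the-shelf toxicity classifier from the Hugging Face Hub; any comparable
# content-safety model could be substituted here.
toxicity_classifier = pipeline("text-classification", model="unitary/toxic-bert")


def generate(prompt: str) -> str:
    """Stand-in for a call to the system under audit; replace with the real model."""
    return "Thank you for reaching out. We offer a full refund within 30 days."


def evaluate(cases):
    results = []
    for case in cases:
        response = generate(case["prompt"])
        # Crude character-level truncation to keep the classifier input short.
        tox = toxicity_classifier(response[:512])[0]
        results.append({
            "prompt": case["prompt"],
            "within_length": len(response.split()) <= MAX_RESPONSE_WORDS,
            # Crude accuracy proxy: does the response mention the expected facts?
            "covers_reference": all(k.lower() in response.lower()
                                    for k in case["reference_keywords"]),
            # Label names depend on the classifier chosen; "toxic" is assumed here.
            "flagged_toxic": tox["label"] == "toxic" and tox["score"] > 0.5,
        })
    return results


if __name__ == "__main__":
    for row in evaluate(TEST_CASES):
        print(row)
```

In practice, the pass/fail criteria and test prompts would be set jointly with the stakeholders responsible for the use case, and the harness would be rerun whenever the vendor updates the underlying model.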

Finally, generative AI audits should also include analysis of technical documentation relating to the datasets used to train, test, and validate the models. Companies can ask their vendors to submit this information. Researchers and industry practitioners have highlighted the lack of transparency of these systems, and, in particular, the reluctance of LLM developers to divulge information about the datasets used to train them. According to one research scientist at Hugging Face, based on how little was disclosed about ChatGPT, it could well have been “three raccoons in a trench coat.” To promote transparency and assurance in the space, researchers at Oxford University recently proposed a high-level framework for auditing LLMs that can be applied to the generative AI context.
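As a small complement to that kind of documentation review, the sketch below shows one way a purchaser might run a basic completeness check over vendor-supplied dataset documentation, assuming the vendor shares it as a structured JSON file. The required fields and file name are hypothetical placeholders, not drawn from the Oxford framework or any formal standard.

```python
# Minimal sketch: a completeness check over vendor-supplied dataset
# documentation, assuming it is shared as a structured JSON file. The required
# fields and file name are hypothetical placeholders, not a formal standard.
import json

REQUIRED_FIELDS = [
    "data_sources",
    "collection_period",
    "licensing",
    "personal_data_handling",
    "known_gaps_and_biases",
    "intended_use",
]


def check_datasheet(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        datasheet = json.load(f)
    missing = [field for field in REQUIRED_FIELDS if not datasheet.get(field)]
    return {"complete": not missing, "missing_fields": missing}


if __name__ == "__main__":
    # "vendor_datasheet.json" is a hypothetical file name.
    print(check_datasheet("vendor_datasheet.json"))
```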

Advances in generative AI are exciting. Many people have already found favourite applications that help them draft simple messages like emails, compose sophisticated texts in their domain of expertise, or generate images and slides to support presentations. This is all to say nothing of the potential for these innovations to advance scientific discovery and other, more significant, societal outcomes.

At the same time, the pace of development should make us question assumptions about the standards for releasing LLMs. Use cases that appear benign today may prove to be less so in the not-so-distant future. And adoption at scale for complex and sensitive uses is going to be extremely challenging, fraught with potential for hidden biases – or even, per their developers, catastrophic damages. While governments contemplate new rules, companies should start preparing to mitigate these risks today.

This article originally appeared on Armilla AI on March 20, 2023, and is republished with permission.

About the author

Phil Dawson is a lawyer and public policy advisor specializing in the governance of digital technologies and artificial intelligence. After beginning his career in litigation, Dawson held senior policy roles at a United Nations specialized agency, in government, and at a global AI software company. He has advised government departments, international organizations, research and advocacy organizations, non-profits, and private companies on a range of digital and AI policy issues, including responsible AI, foreign policy and global governance, tech and human rights, national AI strategy development, standardization, and international trade. He recently served as co-chair of the Canadian Data Governance Standardization Collaborative, a national multi-stakeholder effort launched under the government’s Digital Charter to produce a roadmap of standards needed to support responsible innovation. He is also a member of the Standards Council of Canada’s National Standards Strategy Advisory Committee. Internationally, Dawson is an active member of the OECD.AI Network of Experts, a member of the UN Global Pulse Expert Group on the Governance of Data and AI, and a former member of the World Economic Forum’s Global Council on the Future of Human Rights and the Fourth Industrial Revolution.

