ChatGPT is a “game changer” for artificial intelligence
Imagine a near-future where it is impossible to tell if a piece of writing was produced by a human or a machine. What are the consequences?
Since its release in November 2022, debates about ChatGPT have been everywhere, raising new questions about how artificial intelligence (AI) systems will transform the ways humans communicate with each other and interact with information.
What is ChatGPT?
ChatGPT is an app developed by OpenAI, built on the company’s GPT-3.5 family of large language models (LLMs), which are trained on enormous datasets of human-generated text. The system can generate a wide range of outputs—from summaries of news articles and research papers, to chatbot dialogue, language translations, computer code, technical documentation, and even creative writing—all from simple user prompts.
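For readers curious about what “simple user prompts” look like in practice, models of this kind can also be reached programmatically through OpenAI’s API. What follows is a minimal sketch, not a description of ChatGPT itself: the model name, library version (the openai Python package, v1 or later), and prompt are illustrative assumptions.

    # Minimal sketch: prompting an OpenAI language model from Python.
    # Assumes the openai package (v1+) is installed and the
    # OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration only
        messages=[
            {"role": "user",
             "content": "Summarize this article in two sentences."}
        ],
    )

    # The generated text is returned in the response object.
    print(response.choices[0].message.content)

Run with a valid API key, this prints the model’s generated text, illustrating how a single prompt can drive the kinds of outputs described above.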
ChatGPT’s ability to respond in ways eerily similar to human conversation has sparked headlines and concerns about the potential of LLMs. Will AI soon be writing all our emails, and even news stories like this one? How will LLMs change our approach to education, or affect the job market and the role of human workers? What privacy and security risks should we take into account when developing and using AI-powered language models? And how will ChatGPT’s success affect the ways larger firms like Google or Meta develop their own models?
A “game changer” for AI systems
With more than one million users in its first five days, and news in January 2023 that Microsoft would invest a reported US$10 billion in OpenAI, ChatGPT is a transformative development in AI natural language processing.
In a December 2022 appearance on BNN Bloomberg, the Schwartz Reisman Institute’s Director and Chair Gillian Hadfield, a professor of law and strategic management at the University of Toronto, called ChatGPT a “game changer.” Hadfield hailed the system’s conversational ability to respond to human feedback as a breakthrough, while cautioning that its advances also raise important legal and regulatory issues.
Hadfield pointed to questions about the data informing ChatGPT’s output, which is drawn from text across the internet, as a cause for ongoing concern. While some critics argue that ChatGPT plagiarizes existing material, Hadfield noted that the system does not pull exact text in the manner of current internet search engines, but instead performs an act of synthesis. “We should find ways for the creators of content and knowledge to benefit from it,” said Hadfield. “I don’t see it as theft—I see it as social capital.”
A powerful and disruptive new cultural technology
Hadfield further discussed ChatGPT’s ethical and social implications on an episode of TVO’s The Agenda with Steve Paikin, alongside Alison Gopnik, professor of psychology at UC Berkeley, and Gary Marcus, professor emeritus of psychology and neural science at New York University.
The panelists agreed that ChatGPT is a powerful cultural technology with the potential to transform the ways we think, but that it lacks the capacity to understand as a human does.
Comparing the system to a library of information, Gopnik said that asking whether ChatGPT can think is the wrong question, observing that its ability to summarize existing information may lead us to place a higher value on original and creative thinking among humans.
Marcus emphasized the importance of the general public learning how to use ChatGPT properly, arguing that it ought to come with a warning label and highlighting that the system can fabricate facts or make basic errors of comprehension. He argued that while ChatGPT gives an illusion of human understanding, it does not actually comprehend its output, and could therefore be used maliciously to generate and spread misinformation.
Hadfield called ChatGPT a “general purpose technology” with many diverse potential applications—including teachers using it to generate assignments and researchers using it to summarize information—and predicted that it will be used across all kinds of domains, much like electricity. “We don’t want to let the risks drown out all the advances we can expect from this technology,” Hadfield said.
Noting that ChatGPT has the potential to dramatically disrupt how universities teach and evaluate students, Hadfield suggested that students and teachers learn to use it productively, and that techniques for detecting the output of language models will be needed to address concerns about cheating.
The panelists agreed on the importance of ensuring ChatGPT is used for the betterment of society, which means there is a pressing need to educate the public on its uses and limitations, as well as a potential market for new kinds of AI that can evaluate whether a statement is accurate relative to widely held truths.
Given its varied and far-reaching applications, it is crucial that we consider the ethical and legal implications of systems like ChatGPT to ensure they are used productively. Educators, researchers, businesses, and the general public will all be affected by this technology, and need to work together to better understand its value and implications.