Thinking inside and outside AI: Reflections on ChatGPT and the future of education

 
An illustration of a human figure with a neural network pattern overlaid on their head.

As artificial intelligence (AI) continues to advance, educators face opportunities and challenges for which they were largely unprepared. To explore the impacts of tools like ChatGPT on the classroom, the Schwartz Reisman Institute for Technology and Society and the Centre for Ethics at the University of Toronto co-hosted a symposium on the future of education in the age of generative AI.


When ChatGPT garnered sudden fame following its release in November 2022, educators were presented with new challenges they were largely unprepared for. While not the first artificial intelligence (AI) application with consequences for the classroom, ChatGPT’s ability to easily produce convincing text sparked concerns that it would enable students to circumvent essay writing, making it much more difficult for teachers to provide accurate assessments.

Developed by OpenAI (an organization founded as a non-profit research lab in 2015 and restructured into a for-profit in 2019), ChatGPT is part of a new wave of tools collectively known as “generative AI,” a term that emphasizes their content-creating functions. Other examples of generative AI systems include image generators like DALL·E, Midjourney, and Stable Diffusion, code generators like Copilot, and audio generators like Resemble.

ChatGPT is a large language model (LLM) trained on an immense corpus of text; users converse with it by entering prompts. The “GPT” stands for generative pre-trained transformer, a type of neural network that learns context by scoring how relevant each part of an input sequence is to every other part. This approach is used in natural language processing to generate text that resembles human writing, and it is augmented with feedback from human trainers, who help fine-tune LLMs by ranking the quality of the model’s outputs. ChatGPT is currently freely available to the public, as the system is in its feedback-collection research phase.
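To make that scoring mechanism concrete, the sketch below shows single-head scaled dot-product attention, the core operation inside a transformer. It is a minimal, hypothetical illustration in Python with NumPy, using toy dimensions; it is not OpenAI’s implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: every position scores every other
    position, then mixes their value vectors according to those scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # relevance-weighted blend

# Three tokens with four-dimensional embeddings (arbitrary toy values).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(X, X, X).shape)   # (3, 4)
```

Each row of the output blends all token vectors, weighted by how relevant the model scores them to one another; stacking many such layers, with learned projections producing Q, K, and V, is what lets a transformer model context.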

With the capacity to produce seemingly eloquent arguments and political or religious musings, ChatGPT is leading educators to reconsider their methods of student assessment, their learning objectives, and even their conception of what it means to learn. To address these issues, as well as the myriad social and ethical implications of LLMs, the Schwartz Reisman Institute for Technology and Society (SRI) and the Centre for Ethics at the University of Toronto co-hosted a symposium entitled “Generative AI and the Future of Education” on April 24, 2023, at the Ontario Institute for Studies in Education. Speakers included the Centre for Ethics’ Acting Director Lauren Bialystok, SRI Research Leads Avery Slater and Karina Vold, SRI Graduate Affiliate Elliot Creager, and independent writer and teacher Ryan Fics.

 

From left to right: “Generative AI and the Future of Education” panelists Ryan Fics, Elliot Creager, Karina Vold, Lauren Bialystok, and Avery Slater.

 

Machine learning vs. human learning

What are the differences between the modes of learning used by humans and machines? Addressing this topic from their own disciplinary perspectives, panelists offered key insights to contextualize what role generative AI might come to play in the classroom.

Elliot Creager, a PhD candidate in U of T’s Department of Computer Science, explained that machine learning can be broadly defined as “turning data into computer programs.” ChatGPT falls under the category of generative modelling, a method of unsupervised learning in which algorithms discover patterns in unlabelled datasets so the model can produce outputs that often appear indistinguishable from real examples. Machine learning, Creager noted, is rooted in learning by example, and the way a dataset is used depends on how a system’s goals are specified.
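As a minimal illustration of learning by example, consider a character-level bigram model, far simpler than anything behind ChatGPT but built on the same recipe: discover patterns in unlabelled text, then generate new text from those patterns. The corpus and code here are hypothetical teaching aids.

```python
import random
from collections import defaultdict

# Unlabelled training data: the model sees only raw text, no labels.
corpus = "the cat sat on the mat. the dog sat on the log."

# "Training": count which character follows which (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample(start="t", length=40):
    """Generate text by repeatedly sampling the next character in
    proportion to how often it followed the current one in training."""
    out = start
    for _ in range(length):
        successors = counts[out[-1]]
        chars, weights = zip(*successors.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(sample())  # e.g. "the dog sat on the mat. t..."
```

The “goal” here is specified entirely by the counting rule; an LLM replaces the counts with billions of learned parameters and a far richer objective, which is why, as Creager noted, how a system’s goals are specified matters so much.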

Karina Vold, an assistant professor at U of T’s Institute for the History and Philosophy of Science and Technology, approached the question from the perspectives of ethics, epistemology, and philosophy of mind. Regarding ethics, Vold noted that ChatGPT tends not to cite original sources, raising concerns about copyright and the exploitation of content creators’ work. On epistemology, the study of the construction, nature, and limits of knowledge, Vold observed that if applications like ChatGPT proliferate, we could enter an ecology of noise in which no one is held accountable for what is disseminated. As a philosopher of mind, Vold suggested that ChatGPT merely produces text without “thinking” or any deep engagement with underlying meaning in a cognitive sense. One of the goals of education is to cultivate better (human) thinkers, Vold noted, and systems like ChatGPT demonstrate a different mode of learning, one grounded in statistical analysis and lacking intentionality and consciousness.

 

ChatGPT and the purpose of education

A second key question, posed by the symposium’s moderator Lauren Bialystok, an associate professor at the Ontario Institute for Studies in Education, concerned the purpose of education. If the goal of teaching is to help people attain some of the skills and virtues we associate with humanness, how can we differentiate which tasks are uniquely human and which are not? Does the proliferation of generative AI tools signal a need to get over our obsession with originality and authenticity?

Ryan Fics, an independent writer, teacher, and curriculum developer, proposed that we are entering “a moment of pedagogical resilience” in which educators are experimenting with strategies and techniques to better prepare students for a shifting job market. In response, Fics has incorporated ChatGPT into his lesson plans by building a review process into assignments that allows students to converse with the system. Through this approach, generative AI can be used creatively to help students develop arguments, critically examine where ideas come from, and articulate their own views on a topic.

Avery Slater, an assistant professor in U of T’s Department of English, suggested that we should think very carefully about how we build incentives, both in the classroom and for education in general. Reflecting on the recursive logic of natural language generation models, Slater noted that the queries we enter, our follow-up responses, and the answers formulated by ChatGPT are all fed back into the system, constituting a feedback loop through which we involuntarily contribute to the system’s continuous advancement. As ChatGPT is trained to generate text that imitates the style of the input data, Slater proposed that what the system is instructed to achieve is “maximizing similarity,” and that this means, to some extent, “minimizing originality.”
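Slater’s “maximizing similarity” framing has a standard mathematical counterpart. Language models are typically pre-trained with a maximum-likelihood objective; the formulation below is a textbook one, not a claim about OpenAI’s unpublished training details:

$$\max_{\theta} \; \sum_{t=1}^{T} \log p_{\theta}(x_t \mid x_{<t})$$

Here $x_1, \dots, x_T$ is a text from the training corpus, $\theta$ are the model’s parameters, and $p_{\theta}(x_t \mid x_{<t})$ is the probability the model assigns to the next token given everything before it. The objective is highest when the model’s predictions match the patterns of text that already exists, one precise sense in which such systems reward similarity over originality.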

 

Panelists respond to audience questions at the “Generative AI and the Future of Education” symposium.

 

Understanding the social impacts of generative AI systems

While matters of academic integrity, misinformation, and transparency are important, ChatGPT also invites us to consider several questions that are fundamental to the nature of learning. If maximizing similarity is the goal, what does it mean to mass-produce texts based on preexisting patterns? Will the debut of ChatGPT become an inflection point in education, rendering some aspects of curriculum design and assessment obsolete, or does it represent merely a change of degree, similar to other writing-assistance tools students already use?

Many contend that ChatGPT does not perform genuine reasoning. What drives us to differentiate between machine learning and human learning, with the former understood as driven by data correlations and quantitative analysis, and the latter as involving situated judgment and qualitative reasoning informed by ethics? Asking why this distinction matters can help us better understand the purpose of education and what it means to assess learning.

The symposium’s discussions made clear that while generative AI tools have the potential to change many aspects of communication, they will not make human writing obsolete; if anything, the opposite is true. Systems like ChatGPT are designed to integrate seamlessly into existing communication channels, where humans must be kept in the loop for the system to be profitable. ChatGPT is not sufficient by itself: it cannot be treated simply as an external tool to be added or uninstalled, nor is its significance a matter of mere automation. It is contingent upon a society that it, in turn, transforms.

Recursivity is at the core of how ChatGPT operates. It is sensitive to subtle tweaks in a prompt’s phrasing, and can become better at emulating a user’s style over several rounds of interaction. At a broader level, however, ChatGPT accentuates material that the system scores as more relevant. What is at stake, then, is not just misinformation: the effectiveness of an output (whether an answer sounds “right”) can only be validated by how well it resonates with prevailing ideas, even when those ideas are discriminatory or unjust. The rising use of LLMs therefore virtually guarantees that past prejudices will, at least to some degree, be carried forward into the present and future, a major concern for anyone seeking to use these tools responsibly.

The social implications of ChatGPT are more diffuse than the “black box” metaphor can capture. In “The Death of the Author,” literary scholar Roland Barthes argued that the author has no sovereignty over their own texts: the meaning lies within the text itself, waiting to be deciphered by the reader. ChatGPT is not a single “author,” and its commercial viability depends on its being embedded in a wide array of communication networks, in which humans are not only receivers but also co-creators (“adopters,” in OpenAI’s language). At its current stage, ChatGPT produces prose that is largely formulaic, reducing writing to an instrument for synthesizing content. Yet writing is a living practice, one of deliberation, encounter, and forming relationships with the world, and as we enter a world filled with machine-generated text, we will need to find new ways to actively remember this.

OpenAI launched a subscription version of ChatGPT earlier this year, and its commercialization is likely to continue. When I asked ChatGPT why it was developed in the first place, it answered, “to assist and enhance human communication and knowledge sharing.” But what exactly constitutes success for these goals? How can end users and diverse stakeholder groups, such as educators, take part in defining the goals of generative AI systems? How might developers adopt more inclusive design approaches before scaling up the use of AI models? Whether it is possible to deliberately design a “pause” into AI research to adequately consider these factors, and whether we can frame AI’s “learning” goals in ways that gesture toward new possibilities with positive impacts, remain open questions.

 



About the author

Yuxing (Yolanda) Zhang is a PhD candidate in information studies at the Faculty of Information, University of Toronto, and a 2022–23 Schwartz Reisman Graduate Fellow. Her doctoral dissertation investigates the sociopolitical and epistemic implications of “smart” agri-food production and eco-management. Her work has appeared in Media, Culture & Society, Roadsides, Canadian Journal of Communication, and Cultural Politics (in press).

