Learning from machines: Karina Vold on what technology can teach us about being human

 

SRI Faculty Affiliate Karina Vold explores the intersections between philosophy and artificial intelligence, the relationship between humans and their tools, and the social and ethical implications of new technologies like GPT-3. As her research shows, technology has a lot to teach us about what it means to be human, and making sense of new tools sometimes requires—and creates—new concepts and ideas.


What can technology teach us about what it means to be human? How do tools reshape our experience?

These questions are at the heart of Karina Vold’s research, which explores the intersections between philosophy, cognitive science, ethics, and the implications of new technologies like artificial intelligence (AI) and machine learning. An assistant professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology, Vold is a faculty affiliate at the Schwartz Reisman Institute for Technology and Society (SRI) and the recipient of a 2020–21 SRI Faculty Fellowship, as well as a faculty associate at U of T’s Centre for Ethics, and an associate fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. In 2021, Vold was named one of 100 “Brilliant Women in AI Ethics” for her contributions to the field.

In this interview, Vold describes her recent projects, what inspires her research, and why the study of AI needs to integrate philosophy to better understand its impacts on society.

Schwartz Reisman Institute: How did you come to teach at the University of Toronto, and what drew you to the study of technology?

Karina Vold: I actually did my undergraduate degree at U of T studying history and political science. I was interested in the mind, so I started taking courses in philosophy, and that ended up introducing me to the extended mind thesis, which is about how technology affects our minds, or can even become integrated into them. I think a lot of people interested in the mind are also interested in AI, because it’s related to the question of whether we can build a mind, and whether we can understand the mind by building it. I did my PhD at McGill, and then a postdoc at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. From there, I came back to U of T, which was kind of a full circle moment: I could return to where it all started and offer some of the courses I wished they’d had 10 years ago!

SRI: How has the discipline of philosophy been impacted by the advance of new technologies?

Vold: Philosophers have always had something to say about technology—this goes back at least as far as Plato. The new technology in Plato’s time was writing, and some, including Socrates, resisted it because they weren’t sure what its consequences would be for the human mind. Thankfully, his student disagreed with him and wrote everything down, which is why we know about it.

Lately, though, I think philosophers have been slower to catch up with new technologies. How much philosophizing has there been about the internet, even though it has completely revolutionized humanity? There has been some thinking about it, of course, but I don’t think it’s been as widespread as it needs to be. The field of AI has a long history of engagement with philosophers, but with other technologies like virtual or augmented reality, and all the other new things we’re seeing, like cryptocurrencies and the metaverse—there’s so much more that philosophers could be saying, and the extent to which philosophy isn’t changing or reacting fast enough is to our detriment.

“New technologies introduce new realities and require new concepts to make sense of them—sometimes the existing concepts just aren’t sufficient.”

 

SRI: What inspires your research? Do you develop an idea first and then look for applications, or do you look at technological developments and then think about their implications?

Vold: It’s a little bit of both. In some cases, it’ll be a new technology that’s come out, like GPT-3, and I’ll seek to develop an analysis of what the system is capable of, toning down some of the more hyperbolic or distracting rhetoric that we sometimes see around new technologies. For example, GPT-3 has very interesting and new capacities, but it’s not artificial general intelligence—it’s something else. I want to figure out what that is and what effects it’s having on humans, but also to make sense of it conceptually, because new technologies sometimes introduce new realities and require new concepts to make sense of them—sometimes the existing concepts just aren’t sufficient.

In other cases, it might start with a philosophical idea. For example, I might challenge a claim that’s been made in philosophy about the limits of what’s possible. Technology is constantly challenging what we think is possible. Sometimes it can be difficult to make sense of new technological developments, which are typically described in technical papers written by scientists, and I think that’s why there can be a lack of conversation at times, because different languages and skill sets are required.

SRI: As a philosopher, what interests you most about AI?

Vold: I think the most interesting thing about AI is what it can tell us about the human mind—how it works, why it works. Trying to build something like our minds gives us an opportunity to reflect on what that tells us about who we are. But, of course, there are also huge implications around building AI and machine learning technologies that are not autonomous agents and don’t have anything like intentionality or consciousness. Figuring out what these systems can do for society and for humans is a big part of my research.

In both cases, it’s trying to figure out what we can say about the mind, and then also trying to figure out how we should be using these technologies. What should we be building? How are they affecting different people? How are they embedding our biases? Whether you’re thinking about AI as an autonomous system or a non-autonomous system, there are rich ethical questions in both cases.

SRI: You mentioned the extended mind thesis—can you explain that concept further, and describe how it might help us better understand our relationship with technology?

Vold: The extended mind thesis pushes back against a standard view of the mind, which maintains that the mind is entirely instantiated by the brain. Nowadays, the mind is widely understood as the software that runs on the hardware of the brain, to use a computer analogy. But what if the hardware isn’t just the brain? What if there’s more hardware that’s relevant? The extended mind thesis argues that we use tools and symbols outside of our brains in an essential way, a way that puts them on par with our brains with respect to how they constitute (or instantiate) the mind. So the mind is not just the brain; it is more than the brain, at least in some cases.

What this means is that there are all sorts of technologies we use, in different ways for different people, that become integrated into our cognitive systems. And as technology gets smarter and smarter, we’re starting to see changes in what we’re capable of, cognitively speaking. Back when Socrates was around, there was only pen and paper—which was still a pretty powerful technology. But now we have smartphones, computers, GPS systems, Fitbits, Alexa. What do all these new technologies mean for our minds? What do they say about the limits of our minds? Are these tools meddling with our thoughts, invading what was previously a private, inner realm? Are they manipulating our thoughts or desires with intrusive personalized ads? On the other hand, in some cases, are they really helping us? For example, we no longer need to remember directions, phone numbers, or shopping lists—everything is easily stored for us. That frees up a lot of cognitive space. Working through these cognitive implications of new technologies is a big part of my research.

One implication of the extended mind thesis is that, to understand mental disorders, we might have to look beyond the brain. There are other parts of our cognitive systems that might be partially responsible, and that might be able to provide corrections. One compelling example is people with Alzheimer’s disease: they’re often tested to see how good their memory is, and based on that, they’re evaluated for whether they have the condition. But the philosopher Andy Clark has written about cases where patients have scored abysmally on these tests, leaving their doctors puzzled about how they could still get around inner-city environments while scoring so poorly. The doctors eventually made at-home visits and saw that these patients had completely changed their homes, putting notes on everything and taking the doors off their cabinets to aid their cognition. And if you removed such a patient from their environment and put them into a long-term care facility—which is what the tests might have suggested—you’d be removing them from all those cognitive aids. So we really don’t have a good picture of what’s going on with someone, cognitively speaking, unless we look at their environment to see what tools they’re using and how they’re using them. This approach also offers a lens on how we might apply these tactics to people who haven’t thought of them themselves: maybe there are ways of supporting cognitive deficits through external aids, in a way that’s less invasive than going in and fixing the brain.

On the other hand, one of the risks of using technology as a form of cognitive scaffolding is that some of these tools are vulnerable to third-party interference. That becomes risky for people who aren’t thoughtful about where they’re putting their thoughts, or who don’t have a metacognitive awareness of how their tools are involved in their cognition. Third-party interference could come from somebody you know, or perhaps even from the business that created the tool and is able to manipulate it. These are some of the new kinds of challenges that come with AI-driven digital technologies, and that we’re exploring.

“There are so many mysteries around the human mind—how it works, why it works. The most interesting thing for me is trying to build something like our minds, and to reflect on what that tells us about who we are.”

 

SRI: What is the biggest limitation about how we currently think about AI?

Vold: Most definitions of AI tend to contain some clause about autonomy—that the system should be autonomous and not depend on input from a human user. That is often taken as the standard for what an AI system should be, and I think it misleads us in our analysis. For the most part, these systems are not autonomous—they are non-autonomous tools, much like our laptops, that depend on human input and frequent intervention, and that’s true of most applications we see in AI. Occasionally, you’ll see the Boston Dynamics robots pop up on your feed, but that’s really an outlier case, and a lot of them are still plugged in. A big part of what we’re missing is the idea that these systems, for the most part, aren’t going to be autonomous—they’re going to be non-autonomous or semi-autonomous systems that do rely on our input. And that doesn’t mean they’re not interesting! What it does mean is that we can’t evaluate their accomplishments or skills entirely on their own. We need to evaluate the user and the system together, as a coupled system, to understand what the AI system is capable of. And the human user can obviously change depending on circumstance, which makes evaluation much more challenging. I think that’s why these kinds of evaluations are sometimes resisted, but in reality it’s evaluations of the entire coupled system—of both the system and the user—that we need most to measure progress in AI. So, in a sense, our definitions of AI are leaving out the human.

SRI: Based on that, it would seem AI requires an interdisciplinary approach, because technical knowledge is necessary, but so is an understanding of greater social and cultural contexts. Can you talk about the role of interdisciplinarity in your research?

Vold: Yes, absolutely. Coupled systems need interdisciplinary study to be understood. It’s not enough to have the people who build a system also be the ones evaluating it, both for the human user and for society in general. As I see it, the computer scientist is the person building the system, but not necessarily the person best equipped to do the rest of the analysis. These systems are also often deployed before we fully understand what their implications are going to be for society, which is all the more reason why we need a whole gamut of social scientists and humanities scholars to engage with them. We also sometimes forget that the computer scientist herself is embedded in society, reading the literature and science fiction around technologies, for example, and that those ideas are often reflected in her creation process. So that’s another reason why we need interdisciplinary perspectives to make sense of the creator, the creation, and the implications of all of that for humanity today.

One article I co-authored recently with several other U of T scholars was about explainable AI in medicine, and it was helpful to have my co-authors in medicine describe how they use AI and work through concrete examples of how theories might translate into practice. Often, what happens in AI ethics is that we come up with high-level principles, and then run into issues with how to translate those principles into practical guidance. Different fields have different values, and they need to make tradeoffs. Those tradeoffs sometimes happen in the way the software is built—for example, you might have to trade between privacy and accuracy, because if you want privacy you need to feed the system less data. Alternatively, if you want the system to be highly accurate you give it a lot of data, but more data can also make the system harder to explain. As a philosopher, it’s helpful to talk to people in different contexts—whether it’s lawyers, app designers, or healthcare researchers—and figure out their needs, the issues on the ground, and what that means for how we make these value tradeoffs.
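To make the privacy and accuracy tradeoff Vold describes a little more concrete, here is a minimal illustrative sketch, not drawn from her paper or any specific system: it trains a simple classifier on synthetic data while injecting increasing amounts of noise into the training records, a crude stand-in for formal privacy protections such as differential privacy. The dataset, noise scales, and model are all assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a crude stand-in for privacy-preserving training,
# not a real differential-privacy mechanism. More noise on the training
# records roughly means more protection for individuals, but lower accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise_scale in [0.0, 0.5, 1.0, 2.0, 4.0]:
    # Add Laplace noise to the training features; the test set stays clean,
    # so we measure how well the "private" model still works on real data.
    noise = rng.laplace(scale=noise_scale, size=X_train.shape) if noise_scale else 0.0
    model = LogisticRegression(max_iter=1000).fit(X_train + noise, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"noise scale {noise_scale:.1f} -> test accuracy {acc:.3f}")
```

In general, test accuracy tends to drop as the noise scale grows, which is the kind of concrete tradeoff that has to be weighed differently depending on the stakes of the domain.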

“For the most part, these systems are not autonomous… in a sense, our definitions of AI are leaving out the human.”

 

SRI: What developments or challenges do you see on the horizon for philosophy and AI?

Vold: I’m excited about the wealth of scholars who have joined the field of AI ethics, and about the results we’re going to see over the next few years from so many of my colleagues who are working on cutting-edge issues. I can already see papers coming out and the conversation moving at a faster pace than it has in the past. I’m also excited to see how philosophers might become more integrated into the creation process of new technologies, and more engaged in policymaking. In general, there’s been an awakening in technology ethics as a result of recent advancements in AI, so it’s exciting to see what we can do with these resources.

 

Vold’s current research project explores how AI can teach humans new methods for problem solving. Among her inspirations is AlphaGo’s 2016 victory in Go over human champion Lee Sedol.

 

One thing in particular I’m interested in is an area I’m calling not quite “machine teaching,” but “learning from machines.” I have a new project funded by SSHRC that explores how we can make epistemic and cognitive advances by relying on new technologies. For example, there was the famous case of AlphaGo beating the human champion Lee Sedol at the game of Go in 2016. They played a five-game series, and in the second game AlphaGo played what’s been called “Move 37,” a move that a good human player would not have made and that commentators found deeply surprising. Yet it ended up being pivotal, and it arguably won the game for AlphaGo.

What was exciting about that move was that AlphaGo did something a human player wouldn’t do—it broke with some of the norms that humans have developed to learn the game. Usually, humans learn Go by internalizing certain general rules of thumb, such as “the second line is the route to defeat.” Those rules can be really helpful, but they can also become so fixed in our thinking that we can’t see past them. AlphaGo learned the game on its own, without relying on these human norms, and it ended up coming up with completely new strategies.

Now, human players are going back, studying the strategies AlphaGo developed, and learning from them. It’s a way in which humans are using AI to break past some of the conventions that guide our thinking, conventions that can both aid and constrain problem solving. We’re also seeing these kinds of discoveries in science, like AlphaFold’s protein-folding breakthroughs, where a machine again broke past human thinking. So this is one development at the intersection of philosophy and AI that I’m interested in exploring further—cases where AI systems can give us new insights, new perspectives on problem solving, and perhaps even new ways of looking at old problems.
