Mimetic models: Ethical implications of AI that acts like you

 
A DALL·E 2 image based on the prompt “Print by Andy Warhol of a robot learning chess.” Source: Reid McIlroy-Young.

In a new paper presented at the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Reid McIlroy-Young explores the concept of “mimetic models”: algorithms designed to simulate the behaviour of an individual in new situations. In this article, McIlroy-Young reflects on the ethical implications of such models and the scenarios in which they might be used.



Machine learning (ML) systems that display human-like behaviour are becoming more common and widely used every day. Systems can now respond to questions using conversational language, create a complex image from a short text description, and even play games like a human.

These new ML systems often aim for “human-like” performance, meaning that they respond in ways that a theoretically “average” human might. However, these systems are also increasingly capable of mimicking individuals: emulating the actions that a particular person would take in a given situation. This means that advances in ML will make it possible to convincingly impersonate real individuals, a scenario that raises new and profound ethical issues.

In a recent session of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), I presented a new research paper co-authored with Jon Kleinberg (Cornell), Siddhartha Sen (Microsoft), Solon Barocas (Microsoft, Cornell), and Ashton Anderson (University of Toronto) that explores “mimetic models”: algorithms that are trained on data from a specific individual, and which are designed to accurately predict and simulate that individual’s decisions and behaviour in new situations. What are the ethical implications of mimetic models, and in what scenarios might they be used? How might these models augment the impersonations made possible by other AI technologies, such as deepfakes and voice cloning?

Read the paper: “Mimetic Models: Ethical Implications of AI that Acts Like You”

 

SRI Graduate Fellow Reid McIlroy-Young and Research Lead Ashton Anderson co-authored a new paper that explores the ethical implications of AI models that can impersonate individual behaviour.

 

How do mimetic models relate to other predictive AI techniques?

Mimetic models are a set of ML techniques that use AI to simulate the behaviour of specific people. While several other types of predictive technologies utilize AI, such as “deepfakes,” mimetic models have different properties and potential applications that make them unique.

While deepfakes imitate the appearance of specific people, mimetic models imitate an individual’s behaviour. This distinction is significant: although mimetic models are concerned only with patterns of behaviour and not appearance, there is a strong possibility that a future AI system could employ a deepfake controlled by a mimetic model to create a more convincing simulation of an individual.

Mimetic models also have similarities with recommendation systems, the main difference being that mimetic models take actions rather than merely suggesting potential courses of action. Additionally, recommendation systems are often designed to expand the horizons of users, not only predicting exactly what they would most like but also nudging them to explore new or related genres, topics, or ideas, while mimetic models are intended to accurately reflect the preferences of a given subject.

Contemporary examples of mimetic models

Our interest in mimetic models arose from earlier research that explored modeling individual players in chess. We created deep learning models to predict which move a specific player would make in a given board position. However, as we prepared to release the code and models from this research, we realized there was no existing guidance for how to handle these new mimetic models; in fact, there was not even an established term for them in the literature.

 

Diagram of the Maia Chess platform deep learning framework. During training, Maia is given a position that occurred in a human game and tries to predict which move was made. After seeing hundreds of millions of positions, Maia accurately captures how people at different levels play chess. Source: Reid McIlroy-Young.
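To make the setup concrete, the sketch below shows the core idea of training a move-prediction model on a single player's games. It is a simplified illustration rather than the actual Maia Chess code: the piece-plane board encoding, the move-vocabulary size, and the random tensors standing in for real game data are all placeholder assumptions.

```python
# A minimal, illustrative sketch of a "mimetic" chess move predictor
# (not the actual Maia Chess code). Assumptions: boards are encoded as
# 12x8x8 piece-plane tensors and moves are indexed into a fixed
# vocabulary of move labels; both are placeholders here.
import torch
import torch.nn as nn

NUM_PLANES = 12          # one plane per piece type and colour (assumed encoding)
MOVE_VOCAB_SIZE = 1968   # assumed size of the move vocabulary

class MoveClassifier(nn.Module):
    """Predicts which move a specific player would make in a position."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(NUM_PLANES, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Linear(64 * 8 * 8, MOVE_VOCAB_SIZE)

    def forward(self, boards):                  # boards: (N, 12, 8, 8)
        x = self.conv(boards).flatten(1)
        return self.head(x)                     # logits over the move vocabulary

# Training loop over (position, move-played) pairs drawn from one
# player's games; random tensors stand in for real data here.
model = MoveClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    boards = torch.randn(32, NUM_PLANES, 8, 8)          # placeholder positions
    moves = torch.randint(0, MOVE_VOCAB_SIZE, (32,))     # placeholder moves played
    loss = loss_fn(model(boards), moves)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```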

 

While releasing a model of an individual chess player may not appear to raise ethical issues (chess being a “low stakes” environment is one of the compelling reasons to use it for AI research), there is still a possibility for harm. For instance, if a mimetic model trained on a particular chess player became popular, the player might find that their opponents could beat them more easily after practicing against a simulated copy. Mimetic models are also impossible to fully anonymize, so the player may be unwillingly linked to the research project.

Another example of a mimetic model can be found in a recent experiment to create an AI chatbot of the philosopher Daniel Dennett, in which a deep learning model utilizing GPT-3 was created (with Dennett’s permission) with the ability to answer questions like him. When asked to pick the real Dennett’s answers from among four given by this model, survey participants were only slightly better than random at selecting the correct response, guessing correctly only 1.2 times on average out of 5, while experts in Dennett’s work were only able to pick out the correct answers half the time.
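As a rough illustration of how a language-based mimetic model of this kind can work, the sketch below conditions a text generator on samples of a person's writing so that its completions imitate their style. The Dennett experiment fine-tuned GPT-3 on his writings; this sketch instead uses simple few-shot prompting with the small open GPT-2 model via the Hugging Face transformers library, and the writing samples and question are invented placeholders, not actual quotations.

```python
# A rough illustration of a language-based mimetic model via few-shot
# prompting: prepend samples of a person's writing so the generator's
# completion imitates their style. Not the Dennett/GPT-3 setup; the
# model choice, writing samples, and question are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Invented placeholder Q&A pairs standing in for a person's real writings.
writing_samples = (
    "Q: What is consciousness?\n"
    "A: In my view, consciousness is best understood as a bundle of capacities...\n\n"
    "Q: Do we have free will?\n"
    "A: The kind of freedom worth caring about is entirely compatible with...\n\n"
)

question = "Q: Can a machine ever really understand language?\nA:"

output = generator(
    writing_samples + question,
    max_new_tokens=60,
    do_sample=True,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```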

This model was created from Dennett’s publicly available writings, meaning the same techniques could be applied to innumerable other people. Beyond its use in deception, the creation of this type of language-based mimetic model raises concerns about informed consent, as many people would not want to be mimicked in this way. Such models also present opportunities for manipulation: if I can create an accurate mimetic model of someone’s responses in a conversation, I can practice on the model many times, to the point that I might be able to control the conversation, avoid topics without them realizing it, or even convince them of things they would not normally believe. This type of modeling could be very useful in preparing for a job interview, or even a date.

Mimetic model scenarios and ethical implications

While there are already systems for creating mimetic models today, we can also look at the current state of machine learning research and anticipate what could be possible in the next few years.

Consider, as an example, a tutoring company that requires its tutors to have mimetic models made of them based on their email exchanges, so that when a tutor is unavailable, they can still “respond” to emails immediately. This usage raises several concerns. First, it could devalue workers: by partially replacing them with an automated system, it lowers their overall value to the company. Second, the use of a mimetic model in this instance could mislead customers, who may expect the level of consideration that comes with a one-to-one human relationship.

There are also more subtle concerns in this example around fidelity, which arise if the model is unable to perfectly mimic the tutor’s behaviour. This can manifest in two directions. If the mimetic model performs worse than the tutor, it may cause reputational damage: consider a client who has built a rapport with a tutor through regular communications but then suddenly receives a response with a very different tone because the model was not updated. This would not reflect well on the tutor or the company. Alternatively, if the model communicates better than the tutor, this also creates difficulties, as a customer may sometimes receive long, personalized responses and at other times short ones. The tutor would also increasingly be judged against a model of themselves.

Additional concerns around reputational damage, bias, and consent may also arise when mimetic models are created for historical or deceased figures. If a model designed to produce new works by Beethoven created music that was sub-par, it may still be associated with the composer even though it was not authored by him. Biases may also be exacerbated if models of historical figures are built from limited or skewed data. Finally, obtaining informed consent for models of a deceased person is impossible. There is also a chance that mimetic models could be created for living persons without their consent, using only publicly available information.

While our research is a first step toward understanding the ethical implications of mimetic models, asking questions and beginning to build a framework for analyzing them, much more work needs to be done. Mimetic models could potentially be used to help with many tasks, and by better understanding their potential negative implications, we can improve the likelihood that their future uses will be beneficial.

Want to learn more?


Reid McIlroy-Young

About the author

Reid McIlroy-Young is a PhD candidate in the Computational Social Science Lab at the University of Toronto and a 2021–22 Schwartz Reisman Graduate Fellow. His research explores how to build machine learning systems that can teach humans chess, work which led to the creation of the Maia Chess platform, a neural network that simulates human-like play.

