Why we shouldn’t “move fast and break things”: Shion Guha on the benefits of human-centered data science
Schwartz Reisman Institute (SRI): You joined the Schwartz Reisman Institute as a Faculty Affiliate in 2021, and are a co-author of Human-Centered Data Science: An Introduction, alongside Cecilia Aragon (University of Washington), Michael Muller (IBM Research), Gina Neff (Minderoo Center for Technology and Democracy, University of Cambridge), and Marina Kogan (University of Utah), which will be published by MIT Press in March 2022. Can you tell us about your background?
Shion Guha: My academic background is primarily in statistics and machine learning. I graduated with my PhD from Cornell in 2016, and then was an assistant professor at Marquette University for five years before joining the Faculty of Information last year. U of T is one of the first universities in the world to launch an academic program in human-centered data science, so I was nudged to apply.
My co-authors on the book and I were among the first people to talk about the concept of human-centered data science, in a workshop at one of our main conferences in 2016. We decided to write a textbook about the field because we felt there was a missing link between what is taught in the classroom and what happens in practice. In the last few years, the field has talked a lot about algorithmic biases and the unforeseen consequences of technology for society. And so, we decided that instead of writing an academic monograph, we wanted to write a practical textbook for students.
SRI: What does it mean for data science to be “human-centered,” and how does this approach differ from other methodologies?
Guha: The main idea is to incorporate human-centered design practices into data science—to develop human-centered algorithms. Human-centered design is not a new thing; it’s something that has been talked about a lot in the fields of design, human-computer interaction, and so on. But those fields have always been a little divorced from AI, machine learning, and data science. Now, alongside the tremendous growth in data science jobs have come all of these criticisms around algorithmic bias, which raise the question of whether we are training students properly. Are we teaching them to be cognizant of potential critical issues down the line? Are we teaching them how to examine a system critically? Most computer scientists tend to adopt a very positivist approach. But the fact is that we need multiple approaches, and human-centered data science encourages these practices. Right now, a lot of data science is very model-centered—the conversation is always around what model can most accurately predict something. Instead, the conversation should be, “What can we do so that people have the best outcomes?” It’s a slightly different conversation; the values are different.
Human-centered data science starts off by developing a critical understanding of the socio-technical system under investigation. So, whether it’s Facebook developing a new recommendation system, or the federal government trying to decide on facial recognition policy, understanding the system critically is often the first step. And we’ve actually failed a generation of computer science and statistics students because we never trained them in any of this. I believe in a world where data-driven decision-making has positive outcomes, but I don’t believe in a world where we do this uncritically. I don’t believe in a world where you just throw stuff at the wall and see what sticks, because that hasn’t worked out at all.
Next, we engage in a human-centered design process, which can be understood through three different lenses. First, there’s theoretical design: the model should be drawn from existing theory—what we know about how people interact in a system. For instance, a lot of my work is centered around how algorithms are used to make decisions in child welfare. So, I need to ensure whatever algorithm I develop draws from the best theories about social work and child welfare. Second, there’s something called participatory design, which means inviting all the stakeholders into the process to let them interpret the model. I might not know everything about child welfare, but my models are interpreted by specialists in that area. Participatory design ensures that the people who are affected by the system make the decisions about its interpretation and design. The third process is called speculative design, which is about thinking outside the box. Let’s think about a world where this model doesn’t exist, but something else exists. How do we align this model with that world? One of the best ways to describe speculative approaches is the series Black Mirror, which depicts technologies and systems that could happen.
Human-centered design practices are about taking these three aspects and incorporating them into the design of algorithms. But we don’t stop there, because you can’t just put something into society without extensive testing; you need to do longitudinal field evaluation. And I’m not talking about six-week evaluations, which are common—I’m talking about six months to a year before putting something into practice. So, all of this is a more critical and slowed-down design process.
“Right now, a lot of data science is very model-centered—the conversation is always around what model can most accurately predict something. Instead, the conversation should be, ‘What can we do so that people have the best outcomes?’ It’s a slightly different conversation; the values are different.”
SRI: Interdisciplinarity is a key component to developing a human-centered data science approach. What helps you to collaborate successfully with researchers in other disciplines?
Guha: I think one of the major impediments to collaboration between disciplines, or even sub-disciplines, is the different values people have. For instance, in my work in child welfare, the government has a set of values—to optimize between spending money and ensuring kids have positive outcomes—while the people who work in the system have different values—they want each child to have a positive outcome. When I come in as the data scientist, I’m trying to make sure the model I build reconciles these values.
My success story has been in working with child welfare services in Wisconsin. When they came to us, I cautioned them that we needed to engage with each other through ongoing conversations to make something successful. We had many stakeholders: researchers in child welfare, department heads, and street-level case workers. I brought them together many times to figure out how to reconcile their values, and that was one of the hardest things that I ever did, because people talk about their objectives, but don’t often talk about their values. It’s a hard thing to say, okay, this is how I really believe the system should work.
We conducted workshops for about a year to understand what they needed, and what we eventually realized was that they were not interested in building an algorithm that predicted risk-based probabilities; they were interested in something else: how to make sense of narratives, such as how to describe the story of a child in the system. If a new child comes into the system, how can we look back and consider how this child displays the same features as other historical case studies? What positive outcomes can we draw upon to ensure this new child gets the services they need? It’s a very different and holistic process—it’s not a number, it’s not a classification model. If I had just been given some data, I would have developed a risk-based system that would have ultimately yielded poor outcomes. But because we engaged in that difficult community-building process, we figured out that what they really wanted was not what they told me they wanted. And this was because of a value mismatch.
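As an aside, the contrast Guha describes can be made concrete with a small, purely illustrative sketch. This is not the Wisconsin system; every record, feature, and name below is hypothetical. Instead of producing a single risk score for a new case, the idea is to surface similar historical cases that had positive outcomes:

```python
# Purely illustrative sketch (not the Wisconsin system): contrast a single
# risk score with a case-based view that retrieves similar historical cases
# that had positive outcomes. All records and feature values are hypothetical.
import math

historical_cases = [
    {"id": "case-101", "features": [0.2, 0.7, 0.1], "positive_outcome": True},
    {"id": "case-102", "features": [0.9, 0.3, 0.8], "positive_outcome": False},
    {"id": "case-103", "features": [0.3, 0.6, 0.2], "positive_outcome": True},
]

def similar_positive_cases(new_features, cases, k=2):
    """Rank past cases with positive outcomes by Euclidean distance."""
    scored = [
        (math.dist(new_features, c["features"]), c["id"])
        for c in cases
        if c["positive_outcome"]
    ]
    return sorted(scored)[:k]

# A new child enters the system: surface comparable past cases (and, in a
# real setting, the services that helped) rather than one risk number.
print(similar_positive_cases([0.25, 0.65, 0.15], historical_cases))
```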
Similarly, when I go to machine learning conferences, there’s a different kind of value mismatch. People are more interested in discussing the theoretical underpinnings of models. I am interested in that, but I’m also interested in telling the story of child welfare; I’m interested in pushing that boundary. But a lot of my colleagues are not interested in that—their part of academia values optimizing quantitative models, which is fine, but then you can’t claim you’re doing all these big things for society if that’s really where your values lie.
“The worst slogan that I’ve ever heard in the technology sector is ‘move fast and break things.’ You don’t want to do that if you’ve got the lives of people on the line. You can’t do that.”
SRI: It’s interesting to note how much initial effort is required, involving a lot of groundwork that many wouldn’t necessarily consider part of system design.
Guha: You know, the worst slogan that I’ve ever heard in the technology sector, even though people seem to really like it for some reason, is “move fast and break things.” Maybe for product recommendations that’s fine, but you don’t want to do that if you’ve got the lives of people on the line. You can’t do that. I really think we need to slow down and be critical about these things. That doesn’t mean that we don’t build data-driven models—it means that we build them thoughtfully, and we recognize the various risks and potential issues down the line, and how to deal with them. Not everything can be dealt with quantitatively.
Algorithmic fairness has become very popular; it’s the hottest area of machine learning right now. The problem is that we look at this from a very positivist, quantitative perspective, by seeking to make algorithms that are mathematically fair, so that different minority groups do not have disproportionate outcomes. Well, you can prove a theorem to that effect and put it into practice, but here’s the problem: models are not used in isolation. If you take that model and put it in a setting where people are biased, then when biased people interact with an unbiased, mathematically fair algorithm, the resulting decisions become biased anyway. Human-AI interaction is really important—we can’t pretend our systems are used in isolation. Most problems happen because the algorithmic decision-making process itself is poorly understood, and how people make a particular decision from the output of an AI system is something we don’t yet understand well. This creates a lot of issues, yet the field of machine learning doesn’t value that. The field values mathematical solutions, but that only counts as a solution within a reductionist framework. It has nothing to do with reality.
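For readers unfamiliar with the term, “mathematically fair” usually refers to group metrics such as demographic parity. The sketch below is purely illustrative, with hypothetical predictions and group labels; Guha’s point is that satisfying such a check in isolation says little once biased people start interacting with the model’s outputs.

```python
# Illustrative sketch of one common formalization of "disproportionate
# outcomes": demographic parity, i.e., comparing the rate of positive
# predictions across groups. All data and group labels are hypothetical.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive (1) predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

preds  = [1, 0, 1, 1, 0, 0, 1, 0]                 # hypothetical model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels

rates = positive_rate_by_group(preds, groups)
print(rates, "parity gap:", max(rates.values()) - min(rates.values()))
```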
SRI: What are some of the challenges around the use of algorithmic decision-making?
Guha: My co-authors and I identify three key dimensions of algorithmic decision-making. One dimension is that decisions are mediated by the specific bureaucratic laws, policies, and regulations that are inherent to that system. So, there are certain things you can do, and can’t do, that are mandated by law. The second dimension is very important; we call it human discretion. For example, police may see a minor offense like jaywalking but choose to selectively ignore it because they are focused on more significant crimes. So, while the law itself is rigid, inside the confines of the law there is discretion. The same thing happens with algorithmically mediated systems, where an algorithm gives an output, but a person might choose to ignore it. A case worker might know more about a factor that the algorithm failed to pick up on. This works the other way too, where a person might be unsure and go along with an algorithmic decision because they trust the system. So, there’s a spectrum of discretion. The third aspect is algorithmic literacy. How do people make decisions from numbers? Every system gives a different visualization or output, and an average social worker on the ground might not have the training to interpret that data. What kinds of training are we going to give people who will implement these decisions?
Now, when we take these three components together, these are the main dimensions of how people make decisions from algorithms. Our group was the first to unpack this in the case of public services, and it has major implications for AI systems going forward. For instance, how you set up the system affects what kinds of opportunities the user has for exercising discretion. Can everyone override it? Can supervisors override it? How do we look at agreements and disagreements and keep a record of that? If I have a lot of experience and think that the algorithm’s decision is wrong, I might disagree—however, I might also be afraid that if I don’t agree, my supervisor will punish me.
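A minimal sketch of the kind of record-keeping Guha mentions, assuming hypothetical field names rather than any real deployment, might log each algorithmic recommendation alongside the worker’s final decision so that agreements, disagreements, and overrides can be reviewed later:

```python
# Hypothetical sketch only: log each algorithmic recommendation next to the
# worker's final decision so overrides can be audited. Field names are
# assumptions, not taken from any real system described in the interview.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    algorithm_output: str   # e.g., "high risk"
    worker_decision: str    # e.g., "no action"
    worker_role: str        # e.g., "case worker" or "supervisor"
    rationale: str          # free-text reason for agreeing or overriding
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.algorithm_output != self.worker_decision

log = [
    DecisionRecord("case-204", "high risk", "no action", "case worker",
                   "Recent home visit contradicts the flagged factors."),
]
print(sum(r.overridden for r in log), "override(s) recorded")
```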
Studying the algorithmic decision-making process has been crucial for us in setting up the next series of problems and research questions. One of the things that I’m very interested in is changes in policy—for example, my work in Wisconsin was utilized to make changes that had positive outcomes. But a critical drawback is that I haven’t engaged with legal scholars or the family court system. One of the things I like about SRI is that it brings together legal scholars and data scientists, and I’m interested in collaborating with legal scholars to think about how to write AI legislation that will affect algorithmic decision-making processes. I think it demands a radical rethinking of how laws are drafted. I don’t think we can engage in the same process anymore; we need to think beyond that and engage in some speculative design.
“One of the things I like about SRI is that it brings together legal scholars and data scientists, and I’m interested in collaborating with legal scholars to think about how to write AI legislation… I don’t think we can engage in the same process anymore; we need to think beyond that and engage in some speculative design.”
SRI: What is the most important thing that people need to know about data science today, and what are the challenges that lie ahead for the discipline?
Guha: Obviously, I’m very invested in human-centered data science. I really think this process works well, and since U of T began its program, the field has expanded to other universities and is gaining momentum. I really want to bring this to the education of our professional data science students—those who are going to immediately go out into industry and start applying these principles.
Broadly, the challenges for the discipline are the problems I've alluded to, and human-centered data science responds to these issues. We should not be moving fast, we should not be breaking things—not when it comes to making decisions about people. It doesn't have to be high stakes, like child welfare. You can imagine something like Facebook or Twitter algorithms where ostensibly you're doing recommendation systems, but that really has ramifications for democracy. There are lots of small things that have major unintended consequences down the line, even something like algorithms in the classroom to predict whether a child is doing well or not.
The other main challenge is this value mismatch problem I described. We need to teach our next generation of students to be more compassionate, to encourage them to think from other perspectives, and to center other people’s values and opinions without centering their own. So how do we get better? Again, human-centered design has worked very well in other areas, and we can learn from what has worked there and apply it here. Why should we pretend that we have nothing to learn from other areas?
Photo credits: Marquette University; Faculty of Information, University of Toronto.