Power and prediction: Avi Goldfarb on the disruptive economics of artificial intelligence

 

In his new book, Power and Prediction: The Disruptive Economics of Artificial Intelligence, SRI Research Lead Avi Goldfarb argues we live in the “Between Times”: after discovering the potential of AI, but before its widespread adoption. As Goldfarb explains, the evolution of AI innovation will require systems-level changes to the ways that organizations make decisions.


Artificial intelligence (AI) has the potential to transform how we live, work, and organize society. But, a decade into the AI revolution that began with breakthroughs in the field of deep learning, many sectors remain unchanged. What happened?

Delays in implementation are an essential part of any technology with the power to truly reshape society, says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto, and a research lead at the Schwartz Reisman Institute for Technology and Society (SRI). “If this technology is as exciting as electricity, the Internet, computing, and the steam engine, it will take time,” he contends, “because a lot of complementary innovation has to happen as well.”

Goldfarb makes the case for how AI innovation will evolve in a new book co-authored with Ajay Agrawal and Joshua Gans, Power and Prediction: The Disruptive Economics of Artificial Intelligence (Harvard Business Review Press, 2022), a sequel to their widely acclaimed Prediction Machines: The Simple Economics of Artificial Intelligence (2018). The trio are also co-founders of Creative Destruction Lab, a non-profit organization that helps science- and technology-based start-ups scale.

Power and Prediction is a lucid, informative, and exciting read. In explaining AI’s potential to transform decision-making and the system-level changes this requires, Goldfarb and his co-authors point to new approaches and to barriers that will need to be overcome. The book’s insights speak to many pressing issues, including the future of public health, inequality, and climate change, and envision a world where new technologies benefit everyone, rather than a select few.

So, what will it take for us to move past the “Between Times”—the term Goldfarb and his co-authors use to describe the present, in which AI innovation has been unlocked but not yet optimized—and shift beyond “point solutions” focused on specific tasks to “system solutions” that generate transformational change? We sat down with Goldfarb to discuss these questions and more.

The following interview has been condensed for length and clarity.

 
Avi Goldfarb

Artificial intelligence is as exciting a technology as electricity and computing, but it will take time to see its effects, says SRI Research Lead Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto.

 

Schwartz Reisman Institute: What changed in your understanding of the landscape of AI innovation since Prediction Machines?

Avi Goldfarb: We wrote Prediction Machines thinking that a revolution was about to happen, and we saw that revolution happening at a handful of companies like Google, Amazon, and others. But when it came to most businesses we interacted with, by 2021 we started to feel a sense of disappointment. Yes, there was all this potential, but it hadn’t affected their bottom line yet—the uses that they’d found had been incremental, rather than transformational. And that got us trying to understand what went wrong. One potential thing that could have gone wrong, of course, was that AI wasn’t as exciting as we thought. Another was that the technology was potentially as big a deal as the major revolutions of the past 200 years—innovations like steam, electricity, computing—and the issue was system-level implementation. For every major technological innovation, it took a long time to figure out how to make that change affect society at scale.

As we discuss in the book, the first applications of electricity were point solutions. Factories run by steam engines were structured around the power source, so that the most power-hungry machines were closest to the engine and always on. When electric motors were invented, they replaced steam engines because they were cheaper, but at first factories still did everything else the same. However, over time, electricity allowed factories to do things differently: to organize around the production process rather than the power source, and to have huge indoor spaces lit by electric light. To build that kind of space, factories left city centers and went out to the suburbs and rural areas. That led to a change in the way we work, which in turn led to a change in the way we live. The process took about 40 years: it was clear in the 1880s that electricity was going to be a big deal, but it wasn't until the 1920s that the median household and the median factory were electrified.

So, jump forward to the 1960s, and we had this thing called computing. It was clear to anybody paying attention that computers were going to be big. However, if you looked at the data in the 1960s, 70s, and 80s, companies were adopting computers, but this wasn’t actually affecting productivity and profits in a noticeable way, because businesses hadn’t yet reorganized around the technology. That reorganization really happened in the 1980s, and it was clear by the 1990s what computing could do.

Now jump forward to 2012. A team from the University of Toronto, led by Geoffrey Hinton, won the ImageNet competition by showing that deep learning could recognize images much better than the other technologies available. In many ways, that was the beginning of the current excitement around AI. We saw that machines could see, and a number of researchers reached for the metaphor that vision was the preface to the Cambrian explosion in life, arguing that machine vision should enable a similar transformation. It seemed very exciting. In the fall of 2016, Rotman hosted the Market for Intelligence conference, where Hinton said we should stop training radiologists now, because it was completely obvious that within five years the machines were going to be better than humans. Well, we’re more than five years past that now, and it hasn't happened yet.

And so, there’s this tension: between 2012 and 2018, the vision of what AI could do kept expanding, yet by 2021, for most companies and industries, the transformation hadn’t happened. Many companies have found point solutions and applications, but industries, for the most part, haven't been transformed.

The core idea of Power and Prediction is that AI is an exciting technology—as exciting as electricity, the Internet, computing, and the steam engine—but it’s going to take time to see its effects, because a lot of complementary innovation has to happen as well. Now, some might respond that’s not very helpful, because we don’t want to wait! And part of our agenda in the book is to accelerate the timeline of this innovation from 40 years to ten, or even less. To get there, we then need to think through: what is this innovation going to look like? We can’t just say it’s going to take time—that’s not constructive.

“In many cases, prediction will so change how decisions are made that the entire system of decision-making and its processes in organizations will need to adjust. Only then will AI adoption really take off.” — Power and Prediction


SRI: What sort of changes are needed for organizations to harness AI’s full potential?

Goldfarb: Here, we lean on three key ideas. The first idea, from Prediction Machines, is that AI today is not artificial general intelligence (AGI)—it’s prediction technology. The second is that a prediction is useful because it helps you make decisions. A prediction without a decision is useless. So, what AI really does, what prediction machines do, is they allow you to unbundle the prediction from the rest of the decision, and that can lead to all sorts of transformation. Finally, the third key idea is that decisions don’t happen in isolation.

In the book, we talk about the challenges of this unbundling in the context of the water crisis in Flint, Michigan. There were two professors at the University of Michigan who built an AI tool that could predict which houses in Flint had lead pipes with an accuracy of 80%. Why is that a big deal? Well, the only way to know if a house has lead pipes is to dig up the pipes, and it can be very expensive to dig up hundreds of thousands of houses. If you can identify the affected pipes, then you can save people from drinking poisoned water much earlier, and, in the process, you can save hundreds of thousands, or even millions, of dollars.

So, the city started adopting this AI system, and dug in the neighborhoods where it predicted lead pipes, and they were getting them 80% right. It was a feel-good story. But then people started complaining, “How come my friend’s street is getting their pipes dug up, but mine isn’t?” And many politicians noticed that the people who voted for them, or even their entire district, weren’t getting their pipes dug up. And so, the politicians overruled the contractors and decided not to use the AI’s predictions in favour of a “systematic” solution—which, if you look at the map, meant digging pipes in the places where the people who complained to the politicians lived—and the success of the project suddenly dropped to 20%.

If the story ended there, it would be a story of a failed point solution. The prediction machine had taken away the politicians’ power because it separated the prediction from the rest of the decision, and the politicians decided to ignore what was clearly a better decision and do something in their own interest. But people sued, and a judge decided that the city had to listen to the prediction machine—the unbundling of the prediction from the judgment made it clear the politicians weren’t making the right call. The judge overruled the politicians, and, as a consequence, thousands of residents in Flint were given safe water faster.

That unbundling of the prediction from the rest of the decision is a key part of the potential of AI. We often hear the term “machine decisions.” There’s no such thing as a machine decision. What prediction machines do is allow you to change who makes decisions, either up or down the chain of an organization, and when those decisions are made. There are all sorts of examples of what looks like an automated decision but is really some human’s decision—typically made at headquarters—applied at scale.
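To make that unbundling concrete, the sketch below is a hypothetical illustration in the spirit of the Flint example, not the actual University of Michigan model: it scores uninspected houses with a predicted probability of lead pipes, and keeps the decision of where to dig, and how many excavations the budget allows, as a separate, human-set step. All feature names and numbers are invented.

```python
# Hypothetical sketch of prediction-then-decision, in the spirit of the Flint example.
# None of this is the actual Flint model; data and features are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Houses that have already been dug up: features plus whether lead was found.
# Hypothetical features: [year_built_scaled, assessed_value_scaled, permit_flag]
X_known = rng.normal(size=(500, 3))
y_known = (rng.random(500) < 0.4).astype(int)  # 1 = lead service line found

# Houses not yet inspected.
X_unknown = rng.normal(size=(2000, 3))

# Prediction: fit a simple model on inspected houses, then score the rest.
model = LogisticRegression(max_iter=1000).fit(X_known, y_known)
p_lead = model.predict_proba(X_unknown)[:, 1]

# Decision (kept separate from the prediction): dig the highest-risk houses,
# up to however many excavations the city's budget allows this year.
budget = 300
dig_order = np.argsort(-p_lead)[:budget]

print(f"Digging {budget} houses; mean predicted lead risk among them: {p_lead[dig_order].mean():.2f}")
```

The model only supplies the risk ranking; choosing the budget, and whether to follow the ranking at all, remains a human judgment, which is exactly the separation at stake in the Flint story.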

Power and Prediction cover

Power and Prediction is one of Forbes’ best business books of 2022.

Additionally, for organizations to succeed, they require a whole bunch of people working in concert. It’s not about one decision, it’s about decisions working together. One of the challenges with a prediction machine is that if you’re going to make a different decision every time based on the information, everyone else in your organization needs to coordinate with that in order to take advantage of what the predictions are offering.

For example, in insurance, you can make better predictions as to whether or not somebody’s house is going to burn down or if they’ll have a leaky pipe. Right now, that’s not going to make much difference to the company—it will change the price, but it’s not going to be transformative to the way the company operates. In contrast, if you know that a potential customer is likely to have leaky pipes, you can imagine a different kind of insurance company which takes that prediction, and offers to help—by reducing the risk through some kind of monitor, or even fixing the issue, if the risk is high enough—in exchange for a lower price.

Another example is healthcare: in the emergency department, somebody on triage makes a prediction about the severity of what’s going on. They might send a patient immediately for tests, or ask them to wait. Right now, AIs are used in triage at SickKids in Toronto and other hospitals, and they are making it more effective. But, to really take advantage of the prediction, they need to coordinate with the next step. If triage is sending people for a particular test more frequently, then there need to be other decisions made about staffing for those tests, and where to offer them. And, if your predictions are good enough, there’s a different decision to be made altogether: maybe you don’t even need the tests. If your prediction that somebody’s having a heart attack is good enough, you don’t need to send them for that extra test and waste that time or money. Instead, you’ll send them directly to treatment, and that requires coordination between what’s happening upstream on the triage side, and what’s happening downstream on the testing or treatment side.

“data and judgment are complements to prediction. As we increase predictions, we’re going to need more human judgment and more data.”

SRI: How do you see AI technologies impacting the future of work?

Goldfarb: In Prediction Machines, we pointed out that data and judgment are complements to prediction. As we increase predictions, we’re going to need more human judgment and more data, which may mean more jobs or fewer. Machines don't decide; you always need a human. So that’s an optimistic note: we still have humans in control. However, that does not mean we’ll need as many humans in control. We could see real power shifts, in both directions. Take the shift towards centralization: today, lots of on-the-ground managers decide what people should do in their day-to-day work, whether to retain or promote workers, and all that. When we centralize that, somebody at headquarters decides all those things. So yes, it’s still a human deciding, but maybe we can have one human deciding at scale instead of hundreds of thousands of humans.

The optimistic flip side is that sometimes prediction is an important bottleneck in people’s ability to work. If you think about the rise of car services like Uber and Lyft, dispatch was a real bottleneck—it’s a search problem of drivers finding riders and riders finding drivers. Digitization in general, and some of the prediction technologies underlying it, allowed more people to work as drivers. For the most part, that led away from a centralization of power and toward more jobs, not fewer. You can debate whether taxi drivers have more power than Uber’s drivers or executives, but in terms of the count of people working in that area, it’s definitely higher as a consequence of these changes.

SRI: Will certain sectors have greater ease in adopting system-level changes than others? What criteria might determine this?

Goldfarb: There is a real opportunity here for start-ups, because it’s often easier to build a new system when you’re starting from scratch. You don’t have to convince people to come along with your changes, so it becomes a less political process, at least within your organization. If you’re trying to change a huge established company or organization, it’s going to be harder.

I’m very excited about the potential for AI and healthcare, but healthcare is complicated; there are so many different decision makers. There are the patients, the payers—sometimes government, sometimes insurance companies, sometimes a combination of the above—and then there are doctors, who have certain interests, medical administrators who might have different interests, and nurses.

AI has the potential to supercharge nurses, because a key distinction between a doctor and a nurse in terms of training is diagnosis, which is a prediction problem. If AI is helping with diagnosis, that has the potential to make nurses more central to how we structure the system. But that’s going to require all sorts of changes, and we have to get used to that as patients. And so, while I think the 30-year vision for what healthcare could look like is extraordinary, the five-year timeline is really, really hard.

When I was appointed the Rotman Chair in Artificial Intelligence and Healthcare, I looked into the ground-truth data on how much AI there is in healthcare compared to the rest of the economy: what fraction of jobs in each industry require data science and machine learning skills. And, no surprise, at the top were the information industries like Google and others, what you would expect. And near the bottom, right beside accommodation and food services, was healthcare. Healthcare is an extraordinary laggard in hiring for machine learning skills, and I’ve written on several reasons why that might be true. It might be a data issue: perhaps hiring isn’t where the action in healthcare is. But there are a whole bunch of other reasons in terms of who’s making decisions. People tend to avoid decisions that replace their own jobs—whether that’s selfish and cynical, or whether they genuinely see their jobs as too important to replace, is an open question, but we definitely see that doctors are less likely to adopt technologies that replace doctors.

 

SRI: What are some of the other important barriers to AI adoption?

Goldfarb: A lot of the challenges to AI adoption come from ambiguity about what’s allowed or not in terms of regulation. In healthcare contexts, we are seeing lots of people trying to identify incremental point solutions that don’t require regulatory approval. So, we may have an AI that can replace a human in some medical process, but getting approval would be a 10-year, multibillion-dollar process. And so, they’ll implement it in an app that people can use at home with a warning that it’s not real medical advice. The regulatory resistance to change, and the regulatory ambiguity about what’s allowed, is a real barrier. As we start thinking about system changes, there is an important role for government through legislation and regulation, as well as through its coordinating function as the country’s biggest buyer of stuff, to help push us toward new AI-based systems.

There are also real concerns about data and bias, especially in the short run. We wrap up the last chapter of Power and Prediction with a discussion about how AIs today reflect human biases. AI can amplify existing biases, and it can shine a light on them, depending on the context. However, in the long run, I’m very optimistic about AI’s ability to help with discrimination and bias. Many researchers, including at the Schwartz Reisman Institute, have worked to demonstrate these biases in AI systems, whereas it’s very hard to prove conclusively that there are biases embedded in human processes. And so, while a lot of the resistance to AI implementation right now is coming from people who are worried about bias, I think that pretty soon this will flip around, and the resistance will come from people who benefit from bias. If you’re the kind of person who benefits from discriminatory human processes, why would you want an AI making decisions? That’s just going to hurt you, and help all those people who face discrimination.

There’s a story in Major League Baseball we discuss in the book, where they brought in a machine that could say whether a pitch was a strike or a ball, and the people who resisted it most effectively turned out to be the superstars. Why? Well, the best hitters tended to be favored by umpires and faced smaller strike zones, and the best pitchers also tended to be favored and had bigger strike zones. The superstars benefited from this human bias, and when a fair system was brought in, the superstars got hurt. And they fought back and won—for a while, they got rid of that system. So, we should expect people who benefit from bias to resist machine systems that measure accurately enough to overcome that bias.

SRI: What do you look for to indicate where disruptions from AI innovation will occur?

Goldfarb: We’re seeing this change already in a handful of industries tech is paying attention to, such as advertising. Advertising had a very Mad Men vibe until recently: there was a lot of seeming magic in terms of whether an ad worked, how to hire an agency, and how the industry operated—a lot of charm and fancy dinners. That hasn’t completely gone away, but advertising is largely an algorithm-based industry now. The most powerful players are big tech companies; they’re no longer the historical publishers who worked on Madison Avenue. We’ve seen the disruption—it’s happened.

Think through the mission of any industry or company. Once you understand the mission, think through all the ways that mission is compromised because of bad prediction. Where the mission doesn’t align with the ways the organization is actually operating, those are the cases where either the organization will need to disrupt itself, or someone will come along and do what it does better.
