Innovating care: Exploring the role of AI in Ontario’s health sector

 
A computer-generated abstract collage of healthcare-related objects and people.


A recent panel hosted by the Schwartz Reisman Institute explored how AI is transforming Ontario’s healthcare sector, highlighting its potential to improve care and examining pressing challenges around patient involvement, health equity, and trustworthy implementation.


As artificial intelligence (AI) technologies are increasingly adopted and applied in healthcare, interrogating the implications for equitable care and health outcomes is essential. With the rapid expansion of AI, how are these tools being applied in Ontario’s health sector? What are the opportunities for interdisciplinary collaboration in healthcare? What are the unintended consequences of AI use in healthcare, and how can these be mitigated?

A recent panel discussion hosted by the Schwartz Reisman Institute, featuring subject matter experts Mamatha Bhat, Daniel Buchman, and Muhammad Mamdani, and moderated by inaugural SRI Scholar-in-Residence Luke Stark, responded to these questions. The session was part of the full-day Interdisciplinary Dialogues on AI workshop organized by the Institute’s 2023–24 cohort of graduate fellows as part of SRI’s conference, Absolutely Interdisciplinary.

Value-laden healthcare and AI deployment

While AI is often said to hold great promise for prediction, diagnosis, and treatment in healthcare, some have raised concerns about the erosion of patient involvement in clinical decision-making and widening health inequities. As Daniel Buchman, associate professor of bioethics at the University of Toronto and scientist at the Centre for Addiction and Mental Health, stated, “If AI predictions are considered, or at least hoped, to produce knowledge that is superior to that of expert clinicians, this means that predictions rank higher on what’s called an epistemic hierarchy. [This is higher] than other forms of knowledge, such as professional clinical judgment and patient experiential knowledge.”

AI systems may carry greater epistemic weight, with the knowledge they generate perceived as more credible than that of clinicians and patients. This “perceived authoritativeness of machine learning” in clinical decision-making may exclude patients from the process and is incongruent with the movement for patient-centred care and patient-oriented research. Using examples from mental health, Buchman highlighted how patients may not be seen as valid or credible narrators of their own experiences when care providers rely on AI systems.

Clinicians and scientists need to strive to balance the anticipated benefits of AI in healthcare with the real threat of epistemic injustice, a “type of harm done to individuals or groups limiting their ability to contribute to and benefit from knowledge.” This highlights the importance of evidence in scientific decision-making and the need to explicitly address what “evidence” means, including what evidence matters most, for whom, and for what decisions. In his presentation, Buchman proposed epistemic humility as one approach to counter the omission of patients from AI-enabled clinical decision-making, as it prompts clinicians and scientists to weigh evidence, acknowledge the limitations of currently available data, and consider the social value of the AI systems they are developing and deploying.

 

From left to right: Mamatha Bhat, Daniel Buchman, and Muhammad Mamdani.

 

Combining interdisciplinary experience

Involving patients is one consideration for the use of AI in healthcare. More broadly, panelists emphasized the need for interdisciplinary collaboration. Amidst growing concerns about AI-enabled clinical decision-making, some view AI as a means of overhauling clinical guidelines, particularly in the field of organ transplant prioritization. Mamatha Bhat, clinician scientist and hepatologist at the University Health Network Organ Transplant Program and principal researcher at the Bhat Liver Lab, indicated that for organ transplants, “the current prioritization and allocation system unfortunately disadvantages female patients, so they have 15 per cent less chance of attracting an organ donor. We needed to do something to address this.”

In this use case, AI technologies could not only provide a more equitable tool but also draw on a patient’s medical history, as well as subtle changes in their clinical condition, to dynamically update the transplant prioritization list, which Bhat observed is “not well reflected by the current prioritization system.” Cumulatively evaluating hundreds of years’ worth of medical history is simply not realistic for a trained clinician during a time-sensitive decision-making process, and AI tools can support clinicians in ensuring that patients at risk of deterioration receive the treatment they require.

Bhat has worked with hepatologists, computer scientists, and clinicians to design a novel AI-based prioritization system that combines decades of experience, incorporating clinical decision rules and laboratory variables. This highlights how critically important interdisciplinarity is when designing AI models in healthcare. Bhat agreed that interdisciplinary collaboration is the future of clinical research, noting that “learning each other’s language is essential,” and that this will happen through close collaboration.

“We all have to be willing to learn from each other. It's an ongoing learning process,” she observed.

Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto (UHT) and director of the University of Toronto’s Temerty Centre for Artificial Intelligence Education and Research in Medicine (T-CAIREM), reiterated the importance of interdisciplinary collaboration in healthcare. For Mamdani, AI developers must engage in dialogue with clinicians to determine which problems may benefit from AI-derived tools and solutions. He pointed to the real-world implementation example of Chartwatch, a tool developed by a team of data engineers, data scientists, operations scientists, human factors design specialists, and organizational and behavioural scientists at UHT, which has reduced the risk of unanticipated death in hospitalized patients by 26 per cent.

Despite the proliferation of clinical AI models, Mamdani highlighted that most will not translate to clinical settings because of data limitations and the need for real-time updates. He emphasized that implementation science approaches are key to assessing what works and what doesn’t. Furthermore, there is a “messiness” to the deployment of AI tools: they do not benefit all patients equally, and methods for analyzing their efficacy are still nascent. The UHT team undertakes bias assessments; however, there are currently no standards setting out what such assessments should include or benchmarks against which their results can be measured. It remains unclear which patients these algorithms work for. In the Canadian context, there are currently no requirements for such assessments, unlike in the United States, where the US Department of Health and Human Services Office for Civil Rights ruled in May 2024 that healthcare organizations, health insurers, and clinicians are legally responsible for managing risks of discrimination arising from the use of AI tools in healthcare.

 

Panelists engage in Q&A following their presentations at Interdisciplinary Dialogues on AI. Photo: Johnny Guatto.

 

Thoughts for the future

The future of AI in healthcare is likely to include a significant increase in the number and types of tools deployed into real-world practice. With this comes a host of new problems requiring new solutions. Who is liable for a treatment decision derived from an AI tool? What liability concerns arise if a patient has a negative health outcome when a clinician follows, or does not follow, an AI tool’s recommendation? How do clinicians and scientists evaluate AI tools to ensure their performance across different subgroups is fair? How do clinicians and scientists ensure that the knowledge and evidence used for treatment recommendations are balanced appropriately for the clinical situation, and do not rely disproportionately on an AI recommendation?

Central to many of these issues is the trustworthiness of AI systems. What factors determine whether an AI system is trustworthy? Breadth and disclosure of training data, external validation, and model interpretability are important in building trustworthy models, but end-user familiarity, expectations, communication, and education may be equally vital.

These challenges will not easily be overcome and require those developing, deploying, and evaluating AI tools in healthcare to, as Mamdani said, “have the courage to do something different that nobody else is doing.”

Watch the recording of this session:


About the authors

Jo-Ann Osei-Twum is a Black scholar, researcher, and organizer, with experience working in academia, government, healthcare, and the not-for-profit sector. She is currently completing her doctoral studies in epidemiology at the Dalla Lana School of Public Health, University of Toronto, with an emphasis in artificial intelligence and data science. Her interests are in applied public health, population health interventions, and critical approaches to data science. Osei-Twum’s graduate research focuses on equitable and effective applications of artificial intelligence for population health.

Michael Colacci is an internal medicine physician completing his PhD in clinical epidemiology at the University of Toronto. His research focuses on the safe and equitable development and deployment of machine learning tools within internal medicine.

Felix Menze is a PhD candidate in the University of Toronto’s Department of Medical Biophysics, specializing in neuroscience and data analysis. His main research focus is the analysis of neural activity in virtual reality to develop new ways of interpreting how the human brain operates during complex, challenging tasks, such as interacting with technology. He is currently conducting and analyzing simulated driving tasks to examine how drivers diagnosed with mild cognitive impairment can regain autonomy through equitable fitness-to-drive assessments.

