New report explores opportunities and challenges for AI innovation in financial services
Financial institutions are using artificial intelligence (AI) for a wide range of applications today, including high-stakes areas such as risk assessment, strategic decision-making, and security. This growing use and development of AI technologies is setting the stage to revolutionize the financial services landscape, promising gains in efficiency, reliability, and robustness.
One key aspect of applying new AI techniques is the development of responsible AI practices: a range of interconnected codes and norms that focus on fairness and on understanding the impacts of AI systems across all stages of development. But stakeholders, including private firms, researchers, and policymakers, are still working out a clear definition of responsible AI and the components necessary to achieve it.
Regulation and standards are also vital tools for driving AI innovation and adoption. Canada’s recently tabled Digital Charter Implementation Act (Bill C-27) and the United States’ Blueprint for an AI Bill of Rights show that today’s regulatory landscape is fast-changing and has transformative potential. However, these frameworks are still in their early stages, and clear standards remain largely undefined, leaving organizations to create their own frameworks for responsible development.
While it’s clear AI has the potential to transform everything from the way we work to our services and infrastructure, what are the next steps required to support AI innovation? How can we accelerate the development of AI in a responsible way that will unlock economic growth and, more importantly, help create a better society for everyone?
Adopt. Innovate. Regulate?
On June 14, 2022, the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto and the Deep Tech Venture Fund at the Business Development Bank of Canada (BDC) co-hosted a roundtable entitled “Adopt. Innovate. Regulate?”, which focused on innovative approaches to the development and regulation of AI in financial services.
The event brought together stakeholders from financial institutions, leading startups, academia, and government to explore how AI is currently used in financial services and to develop a common understanding of responsible AI practices and their implementation. Participants also examined the challenges businesses face when integrating AI into existing practices, highlighted emerging solutions, and discussed the current and future status of AI regulation in Canada and globally.
Insights from the roundtable are collected in a new report (PDF), published today by SRI, which summarizes the discussions and key takeaways from each panel and offers insight into the current and future state of AI innovation in financial services.
New frameworks, new tools, new approaches
Among the many insights that emerged from the roundtable’s conversations, the new report from SRI highlights four key takeaways for policymakers, financial institutions, researchers, and other interdisciplinary stakeholders.
First, practicing responsible AI means more than ensuring that an AI system is “explainable”; it also involves a wide range of consultations at every phase of development. Responsible AI systems must be justified, giving those affected the opportunity to contest undesirable outcomes.
Second, common definitions are vital to enable trans-disciplinary work, expand opportunities, and minimize risk. Companies and regulators cannot certify a system as “fair” or “responsible” unless they have a common framework defining these terms.
Third, responsible AI tools need to be democratized to enable broader implementation. Quality assurance needs to be accessible for everyone, not just organizations with the most resources.
Finally, regulators should consider new frameworks and tools to tackle the fast pace and unique risks posed by AI. While too much regulation can impede innovation, little or no regulation can also stifle growth.
The report also explores the current status of how AI is used in the financial services sector, the growing ecosystem of third-party providers offering responsible AI tools, core principles to hold automated systems accountable, emerging regulatory solutions, and what developments might be anticipated within the next five years.
Next steps for innovation
While AI presents an exciting and game-changing development for businesses, several challenges must be addressed before its innovation can truly flourish at a system-wide level.
Panelists observed that responsible AI techniques will form a necessary baseline for institutions to build and retain trust, but that clear frameworks, such as an integrated assurance ecosystem and consistent standards to enable certification, are also required. They pointed to the need for emerging regulation to be informed by sufficient technical knowledge and experience, and noted that key value-based terms such as “fairness” require clear, shared definitions developed through consultation with a range of stakeholders.
Another central challenge is striking the right balance when it comes to regulation. Principles and guidance that are specific enough to prevent confusion while remaining broad enough to be adaptable to the fast pace of technological advancement will be key to supporting an innovative AI economy.
“Our roundtable was a positive step forward in bringing together a wide range of perspectives to discuss the current state of AI innovation in finance, with many inspiring conversations between experts in different fields,” observed SRI Executive Director Monique Crichlow, who participated in the event as a moderator. “However, much more is needed to solve the challenges of implementing and regulating AI to truly harness its potential. It’s our goal at the Schwartz Reisman Institute to continue moving that conversation further.”
With photos by Josh Fee.