New SRI white paper explores AI regulation through existing financial consumer protections
The use of artificial intelligence (AI) at financial institutions is on the rise in Canada. The Big Six banks, other financial enterprises, and third-party service providers have already launched initiatives to incorporate AI technologies into their businesses and apply them to an ever-increasing number of consumer-facing applications. With the imminent leap towards open banking and a rapidly growing financial technology ecosystem, Canadians can expect to have AI embedded in an increasing number of their interactions with financial institutions.
In the hands of financial institutions, AI technologies have the potential to significantly benefit Canadian consumers, providing them with more agency to choose products that work for them and more tailored customer experiences. However, there are also significant risks involved in the adoption and proliferation of these systems. Without fit-for-purpose AI regulation, consumers could be left without recourse to contest automated decisions, question marketing practices, or make informed decisions regarding AI-powered investment algorithms.
To help maximize the benefits of AI approaches in financial services while mitigating risk to consumers, the Schwartz Reisman Institute for Technology and Society (SRI) has published a new white paper (PDF), written by University of Toronto Faculty of Law JD candidate Isabelle Savoie, that identifies how synergies between AI regulation and existing consumer protections can be leveraged.
The report’s findings establish that regulators do not need to start from scratch when it comes to developing policy for AI governance. Existing provisions in the Bank Act, particularly the updated Financial Consumer Protection Framework (the Framework)—introduced by Bill C-86 in 2018, and in force as of June 2022—can be adapted to regulate client-facing AI use in the Canadian financial services sector. The applicability of the Bank Act to AI use has since been endorsed by Innovation, Science and Economic Development Canada in its AIDA Companion Document.
➦ Read the Schwartz Reisman Institute’s new white paper, “Mitigating material risk: Leveraging consumer protection legislation as a regulator of client-facing AI in the Canadian financial services sector” (PDF)
Regulating AI in financial services in Canada
At present, there are no enforceable AI regulations for the Canadian financial sector. The current regulatory landscape governing the use of AI in consumer banking services is a sparse patchwork of non-binding white papers and reports. For example, the Office of the Superintendent of Financial Institutions (OSFI) has issued Guideline E-23: Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, which provides recommendations for AI oversight in financial institutions.
Meanwhile, the Artificial Intelligence and Data Act, introduced as part of Bill C-27 in June 2022, represents the Canadian federal government’s first comprehensive attempt at regulating AI. If passed, this legislation would impose specific requirements on developers and providers of AI products and systems. In particular, AI systems that are categorized as “high-impact” would have to comply with more stringent obligations. Although AIDA’s precise impact on the financial services sector is uncertain, some effect in the financial services sphere is inevitable, especially given that the definition of “harm” includes “economic loss to an individual.”
Consumer protection legislation as AI regulation
The use of AI in consumer markets generally—but particularly within financial services—leads to a new and powerful form of information asymmetry between individuals and financial institutions. Notably, in a 2019 submission to the Department of Finance, the Office of the Privacy Commissioner of Canada cited the use of AI in financial services as an area “requiring more attention,” particularly with regard to institutional transparency and accountability. Key risks to consumers raised by the increased adoption of AI by Canadian financial institutions include model bias, model inaccuracy, and lack of transparency. These can have adverse impacts on, for example, credit decisioning, automated investing, and individualized product marketing.
The consumer protection amendments that Bill C-86 made to the Bank Act came into force in June 2022. Intentionally applying the Framework to regulate high-risk client-facing uses of AI is likely to provide an adequate stopgap to mitigate these risks until they can be addressed either by federal AI legislation, such as AIDA, or other finance-specific AI regulation. Several provisions introduced by the Framework support key elements of AI regulation, including transparency, non-discrimination, oversight, and accountability. These present novel opportunities to regulate AI through existing consumer protection mechanisms.
“As AI approaches continue to proliferate, consumers should not be made to wait for legislation to regulate the institutions they rely on.”
Benefits to financial institutions and consumers
Beyond its ability to mitigate some of the key risks posed by client-facing AI systems, using the Framework to regulate AI use in the financial sector would be beneficial to consumers, financial institutions, fintech firms, and lawmakers for several reasons.
First, the solution is simple and effective. It is easier to leverage and tailor an existing regulatory framework than it is to craft and implement a new one from scratch. Given the pace at which AI deployment in the financial sector is growing, the industry and its consumers cannot afford to wait and see which uses will ultimately be deemed acceptable.
Second, the application and possible extension of the Framework to AI-enabled services would allow for increased experimentation with financial AI tools, leading to some potential quick wins and benefits for financial institutions, consumers, and the Canadian innovation economy before the outcome of the AIDA debate is known.
Third, from a broader policy perspective, applying and enhancing the relevant Framework provisions to regulate AI systems would bring Canada’s financial consumer protection framework in line with key principles from the OECD Task Force on Effective Approaches for Financial Consumer Protection in the Digital Age. The task force recommends that AI algorithms deployed in financial services be designed to produce outcomes that are objective, consistent, and fair for financial consumers. Adherence would signal Canada’s intent to become a global leader in the space.
The way forward
As AI approaches continue to proliferate in the financial sector, consumers should not be made to wait for federal legislation like AIDA to regulate the institutions they rely on for their financial needs. Discussions regarding AI regulation in the Canadian financial sector should focus on recognizing and remediating the deficits of current financial consumer protection legislation with regard to AI, while also understanding how the Framework and AIDA might act as complementary regulatory regimes. This is the best first step toward protecting customers of Canada’s financial institutions from material AI-driven risks to their financial health.
For more in-depth analysis of the ways in which AI regulation and consumer protection principles work together, the extent to which the Framework adequately protects against AI risks, and the practicability of such a solution, read the new white paper by the Schwartz Reisman Institute.