To guarantee our rights, Canada’s privacy legislation must protect our biometric data
Much ink has been spilled dissecting Bill C-27, Canada’s Digital Charter Implementation Act, which was introduced in June 2022. The bill comprises three parts: the Consumer Privacy Protection Act (CPPA), which revises the rules governing the private sector’s use of citizens’ data; the Personal Information and Data Protection Tribunal Act; and the Artificial Intelligence and Data Act (AIDA), which would become the first legislation governing artificial intelligence (AI) in Canada.
While the Consumer Privacy Protection Act is set to replace Canada’s outmoded Personal Information Protection and Electronic Documents Act (PIPEDA), many experts have scrutinized its shortcomings. Commentaries from researchers at the Schwartz Reisman Institute have noted the CPPA’s weak definition of privacy, its impacts on youth rights and platform governance, and its failure to engage with collective data rights and Indigenous data sovereignty.
Beyond these critiques, there are additional concerning aspects of the CPPA to consider, which groups such as the Right 2 Your Face coalition are working to bring to public attention. Amidst the broad social impacts of datafication, we must pay specific attention to the risks posed by the collection of biometric data and how this information can be leveraged by facial recognition technology. The CPPA’s neglect of the risks posed by facial recognition technology—most notably, the challenges these tools pose for human rights—suggests that the bill has an unstable grasp on our tricky technological present.
What is facial recognition technology?
Facial recognition technology (FRT) aims to identify individuals through their facial features, extracting biometric information from live or archived recordings and leveraging AI to make predictions. If the system is sufficiently confident that the features extracted from Picture A resemble the features extracted from Picture B, it flags the comparison as a match.
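To make that matching step concrete, here is a minimal, purely illustrative sketch in Python—not any vendor’s actual system. It assumes a hypothetical model has already reduced each face image to a numeric “embedding” vector; two faces are flagged as a match when the similarity between their vectors clears a confidence threshold.

```python
# Illustrative sketch of the core FRT matching step (hypothetical, simplified).
# A real system would use a deep neural network to turn each face image into
# an embedding vector; here we stand in random vectors for those embeddings.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_match(embedding_a: np.ndarray, embedding_b: np.ndarray,
             threshold: float = 0.85) -> bool:
    """Flag a 'match' when similarity clears the confidence threshold."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold


# Example: compare a probe image's embedding against a stored database.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)                                    # "Picture A"
database = {name: rng.normal(size=128) for name in ["person_1", "person_2"]}
database["person_3"] = probe + rng.normal(scale=0.1, size=128)  # near-duplicate

for name, stored in database.items():
    similarity = cosine_similarity(probe, stored)
    print(f"{name}: similarity={similarity:.3f}, match={is_match(probe, stored)}")
```

Where that threshold sits is a design choice with real consequences: lower it and the system flags more matches, including more misidentifications, which is the failure mode behind the wrongful encounters with the criminal justice system discussed below.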
Uses for FRT abound in Canada’s public and private sectors. Law enforcement agencies are among the most concerning users, with the Toronto Police Service, Ontario Provincial Police, Royal Canadian Mounted Police, and dozens of departments across Canada embroiled in controversies surrounding their under-the-radar use of Clearview AI. Disturbing stories have emerged about the use of FRT in Canada’s immigration system, such as that of an African refugee claimant whose status was revoked after a facial recognition mismatch involving her driver’s license photo. FRT is used by Canadian schools to monitor students during remote exams, by casinos to deter those with gambling addictions, and even by political parties to pick candidates.
FRT is also used in the private sector to advance commercial interests. In 2020, the Privacy Commissioner of Canada led an investigation into Cadillac Fairview, a commercial real estate company, which found that the company had been using hidden cameras in its malls to analyze people’s shopping patterns. For every seemingly innocuous use of FRT—such as Apple and Android devices using it to unlock phones, or L’Oreal using it to help customers model makeup—there are more invasive uses, like Rexall and Canadian Tire leveraging FRT to gather information on customers and identify suspected shoplifters.
Perhaps most disconcerting amidst such uses is the collapsing of public and private boundaries. Law enforcement and national security agencies comprise key clientele for FRT companies like Clearview AI and NEC. Though Canada’s privacy laws are divided along public and private lines, those lines blur when AI is in the mix, and the law may struggle to navigate this uncertain terrain given the role public contracts play in developing private-sector AI.
“If we continue to accept inadequately regulated facial recognition technology as a fixture of our society today, we risk accepting increasing levels of surveillance as normal.”
Why is facial recognition technology so concerning?
The use of facial recognition technology to monitor and identify Canadians raises serious concerns about our right to privacy, especially when existing legislation is too outdated or vague to protect our uniquely identifying biometric information. To guarantee the rights of Canadians, our legislation needs to better protect citizens’ biometric data.
These protections are especially important when we consider the negative impacts FRT can have. Deploying FRT in certain contexts—such as surveilling protests—can deter individuals from speaking and acting freely, chilling their rights to freedom of association, assembly, and expression. Furthermore, the use of FRT is prone to function creep, in which data collected for one purpose, such as a driver’s license photo, can be leveraged for another, such as a criminal investigation. This challenges Canadians’ ability to consent to how our data is used—and meaningful consent is part and parcel of meaningful privacy.
There are also important concerns regarding bias and misidentification when it comes to FRT. Though companies are intent on dispelling such claims, researchers continue to demonstrate that FRT can exacerbate racial disparities. As numerous stories have revealed—especially those from people of colour—FRT has led to unjust and unwarranted contact with the criminal justice system due to misidentification.
These factors demonstrate that unrestricted use of FRT can pose a genuine risk to our rights, especially for those who are most vulnerable. If we continue to accept inadequately regulated FRT as a fixture of our society today, we risk accepting increasing levels of surveillance as normal. Treating FRT as innocuous because it unlocks our phones, or enables playful filters on social media, risks blurring our ability to judge how intrusive FRT really is.
How to craft legislation to meet the challenge
Canada lacks clear and comprehensive legislative frameworks for governing the use of facial recognition technologies. Our public sector privacy legislation is woefully out of date, and recent initiatives to modernize the Privacy Act have seemingly stalled. Canada’s current private sector privacy legislation, PIPEDA, does not provide specific protections for the highly sensitive biometric data that fuels FRT.
That means it falls to Bill C-27 to establish bulwarks against FRT’s creep. However, Bill C-27 makes no explicit mention of FRT and is ill-equipped to protect against its use. This massive oversight means that the CPPA cannot keep pace with the threats FRT poses to human rights, equity, and fundamental freedoms such as the right to privacy, freedom of association, freedom of assembly, and the right to non-discrimination.
If new technologies are to truly benefit society, we need clear legislation to prevent FRT from scuppering human rights. Through my recent advocacy work with the Right 2 Your Face coalition, I have come to believe there are three key ways Bill C-27 can accomplish this. First, we need to define biometric information as sensitive and in need of stronger protection. Second, legislators must remove the carveout that lets private entities use FRT under the auspices of “legitimate business purposes.” Third, we need to bolster Bill C-27’s acknowledgment of individual harm with provisions for collective harm.
“Not all personal information is sensitive, but all sensitive information is personal, and Bill C-27 should recognize this reality to prevent misuse of facial recognition technology and abuses of the data that feed it.”
Biometric data is sensitive data
Bill C-27 defines “personal information” as “information about an identifiable individual.” This definition pales in comparison to that of the EU’s General Data Protection Regulation (GDPR), which defines personal data as any information relating to an identified or identifiable person, including names, identification numbers, location data, online identifiers, or “factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity” of a person.
The degree to which information qualifies as “sensitive” often depends on context, but there are special categories of information whose collection, use, or disclosure carries specific risks. Information considered sensitive under the GDPR includes health and financial data, ethnic and racial origins, political opinions and religious beliefs, genetic and biometric data, and sexual orientation. In Canada, sensitive information is not explicitly codified under PIPEDA; instead, it falls to the Office of the Privacy Commissioner of Canada to issue guidance and recommend safeguards—but this guidance does not carry the force of law.
The term “sensitivity” appears often throughout the CPPA, yet it remains undefined in the Bill’s glossary. Bill C-27 should follow global standards and explicitly define sensitive information to capture the above-mentioned categories, with an emphasis on biometric information, which is at the core of an individual’s identity. The EU AI Act is already ahead of the curve on this, explicitly defining biometric data in a way that acknowledges its sensitivity, its unique capacity to identify a person, and the importance of consent in systems that identify based on “physical, physiological, behavioural and psychological human features” (see Amendments 21 and 22).
The CPPA’s failure to capture biometric data as sensitive information leaves far too much up to interpretation, and may lead businesses to establish inadequate protections—or none at all—for information that merits stronger safeguards. Without this definition, other sections of the CPPA—such as 53(2) and 62(2)(e), which refer to retention periods for sensitive personal information, or 57(1), which pertains to establishing safeguards proportionate to the sensitivity of the information—are left open to interpretation, with facial data suspended in limbo and individual privacy rights in a vulnerable position. Not all personal information is sensitive, but all sensitive information is personal, and Bill C-27 should recognize this reality to prevent misuse of FRT and abuses of the data that feed it.
Doing away with businesses’ “legitimate interest”
Recognizing biometric information as sensitive will also go a long way toward plugging potential holes in the CPPA’s “legitimate interest” exception. Provision 18(3) of the CPPA allows a business to collect information without an individual’s knowledge or consent for purposes in which it has a “legitimate interest,” provided that interest outweighs any “adverse effects.” However, without a definition of “adverse effects,” it is not difficult to see businesses framing their use of FRT in service of purposes like loss prevention—which is already happening despite violating Canadian privacy law. Unfortunately, the CPPA’s “legitimate interest” exception to obtaining individual consent tilts the scales in favour of the private sector, suggesting individuals’ privacy rights are less important than corporate profit.
To shield ourselves from the risks of FRT, our individual rights and freedoms must be adequately considered and given priority. This means preventing businesses from having free rein to decide whether their use of FRT qualifies as legitimate. It means concretely acknowledging the risks that come with FRT, and the threats it poses to privacy rights, as “adverse effects.” It also means better protecting biometric data and recognizing that its inherent sensitivity means it can never be collected without knowledge or consent. Firmly codifying our right to privacy, and doing away with a carveout that could allow businesses to gather and use biometric data in FRT systems without consent, will go a long way toward centering human rights in our increasingly datafied world and establishing stronger protections against the misuse of powerful biometric systems. The EU AI Act already prohibits the “indiscriminate and untargeted scraping of biometric data [...] to create or expand facial recognition databases,” recognizing that such practices can “lead to gross violations of fundamental rights, including the right to privacy” (see Amendments 51 and 52). By comparison, Canada has a long way to go to make Bill C-27 more rights-respecting while insulating citizens from FRT’s harms.
Looping in collective harm
AIDA defines harm exclusively on an individual level and disregards the role of AI systems in causing group-based harms. By only codifying this individualistic conception, AIDA fails to recognize the full spectrum of risks associated with AI, such as collective and societal harms.
There are a few ways to close this gap, but two come to mind. First, expanding AIDA’s definition of harm to encompass broader, rights-based harms will be essential to preventing FRT’s negative encroachments. Second, adopting rights-based language in AIDA’s definition of “high-impact systems” will clarify that these systems pose risks to equality rights through their potential for biased and discriminatory decisions.
AI and FRT systems can contribute to collective harms by exacerbating racial discrimination, economic inequities, and other social biases. FRT systems are consistently less accurate for equity-seeking groups, such as racialized individuals, children, elders, members of the LGBTQ+ community, and disabled people. Further, FRT systems may rely upon datasets with underlying or inherent biases, so when implemented in a real-world context, they can perpetuate and amplify existing discrimination against vulnerable or racialized groups.
Although AIDA incorporates the concept of biased output within its definitions, the concept is not put to good use. Specifically, AIDA’s definition includes an exception that weighs whether an outcome that “adversely differentiates, directly or indirectly” against a person based on prohibited grounds of discrimination has adequate justification. There are many problems here. First, the phrase “without justification” is left undefined, as is its purpose. Second, the definition does not acknowledge the harm that biased FRT can cause to entire groups and communities, and to their right to be free from discrimination. Third, and most importantly, it makes the unacceptable and anti-rights suggestion that there are instances in which a biased output against equity-seeking groups can be justified. This makes the inclusion of collective harm in AIDA—through an expanded conception of harm, and through a definition of a “high-impact system” that engages with equality rights—all the more necessary: those whose equality rights may suffer the most from biased FRT systems deserve more robust protections from them.
Towards rights-focused legislation
In a recent appearance before the House of Commons Standing Committee on Industry and Technology, Minister of Innovation François-Philippe Champagne stated that his office intends to introduce amendments to Bill C-27. These include recognizing AI systems that use biometric data as “high impact” under AIDA (though there is still no definition of “biometric data” and no added protections under the CPPA), as well as an amendment to recognize privacy as a fundamental human right. It is essential that Bill C-27 put privacy in the foreground—this much is clear. However, Canadians deserve even better: Bill C-27 must explicitly engage with all of the human rights that powerful technologies like FRT can put at risk. The Right 2 Your Face coalition will be increasingly active on this topic while Bill C-27 is under review.
Given that PIPEDA is more than 20 years old and the Privacy Act is woefully out of date, we must be forward-thinking and ambitious when it comes to new legislation to protect Canadians’ data. By ensuring that our legislation is up to the challenge of regulating technologies like FRT, we can help protect our individual rights, support the collective needs of those who are most vulnerable across our society, and ensure that advanced technologies benefit everyone—or at least don’t hurt anyone.
Want to learn more?
Learn more about CCLA’s Privacy, Technology & Surveillance program.
Read an op-ed by Daniel Konikoff about the need for a greater rights focus in Bill C-27.
Read an interview with Wendy H. Wong on human rights in the age of datafication.
Read the other commentaries in our C-27 series.
About the author
Daniel Konikoff is a PhD candidate at the University of Toronto’s Centre for Criminology & Sociolegal Studies and a graduate affiliate at the Schwartz Reisman Institute for Technology and Society. He is the interim director of the Privacy, Technology, and Surveillance Program at the Canadian Civil Liberties Association and a member of the Right 2 Your Face coalition’s steering committee.