How will Bill C-27 affect the governance of online platforms?

 

How will Bill C-27’s Consumer Privacy Protection Act change the obligations of large internet platforms and search engines, especially with respect to their ability to target advertising and recommend content? Guest contributor Matthew Marinett, a PhD candidate at the University of Toronto’s Faculty of Law, notes that the new legislation leaves many open questions that hinge on the interpretation of platforms’ necessary and legitimate interests. Photo: Robin Worrall/Unsplash.


Canadians are increasingly aware that large internet services and advertising networks harvest and use personal information in ways that threaten individual privacy. Numerous high-profile data breaches and negligent disclosures, such as Facebook’s Cambridge Analytica scandal, have revealed the dangers that can result from the collection and handling of large amounts of personal information by internet firms. These firms often use customer data in potentially damaging ways, such as when personal information is used to target misleading advertisements at racial minorities.

The collection of personal information by private firms is driven by a business model based on targeted advertising—an approach sometimes called “surveillance capitalism.” The core premise is that the more corporations know about their users, the more they can profit by enabling advertisers to target specific individuals. Whether this kind of advertising is broadly effective remains an open question, but it underlies the profits of many of the largest internet firms.

To date, Canada’s private-sector privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA), has often lacked the teeth to enforce proper data collection, use, and disclosure practices by internet companies. However, Bill C-27 is set to overhaul Canada’s privacy regime by replacing the privacy-related parts of PIPEDA with the Consumer Privacy Protection Act (CPPA), which targets many harms of online data harvesting and introduces a new set of sanctions for breaches, intended to push companies toward more stringent compliance.

How much will the CPPA increase the protection of personal information, or change the behaviour of internet firms, when compared with PIPEDA? In short, the answer is: certainly a little bit, and maybe a lot. Indeed, the CPPA might impact the core surveillance capitalism model of many large online social media and search firms. The longer answer is, of course, more complex.


Several aspects of the new legislation will remain the same. Most importantly, the CPPA will not deviate from PIPEDA in the central role given to informed consent in the protection of personal information. The basis of both acts is that an organization cannot collect, use, or disclose personal information without the consent of the individual to whom that information pertains, with limited exceptions. The definition of personal information also does not change: it continues to mean “information about an identifiable individual.” This includes much of the information collected for the purposes of targeted advertising and content recommendations. Finally, the scope of the legislation does not change: it will continue to apply primarily to private-sector organizations that collect, use, or disclose personal information for commercial purposes.

However, there are several other significant areas in which the CPPA will revise aspects of previous data and privacy regulations under PIPEDA. These include allowable purposes of data collection, disclosures and exceptions pertaining to consent, a right to deletion, and a right to explanation. And, while some of these revisions are clear from the proposed legislation, others will be determined by interpretation.

Allowable purposes of collection

Like PIPEDA, the CPPA specifies that an organization cannot collect, use, or disclose any personal information except “for purposes that a reasonable person would consider appropriate in the circumstances.”

However, based on case law that had interpreted the reasonable person provision in PIPEDA [1], the CPPA now qualifies this limitation with a list of factors to consider when determining whether the manner and purposes of collection, use, or disclosure are appropriate. These culminate in a balancing act in which the organization must consider “whether the individual’s loss of privacy is proportionate to the benefits in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual.” Given that an organization is permitted to consider its “legitimate business needs” within the analysis, the CPPA potentially expands the range of appropriate purposes.

In the context of social media companies, it’s possible that collecting information for targeted advertising will no longer be considered appropriate in light of these new factors. However, it is more likely that the opposite conclusion will hold, given the business needs of advertising-based firms. Indeed, since the Office of the Privacy Commissioner of Canada previously released a policy position finding that targeted advertising is a “reasonable purpose” under PIPEDA, this seems unlikely to change. Similarly, for search engines like Google, using your information to customize search results and serve ads may likewise be appropriate. It is therefore probable that most internet firms will continue to meet this threshold requirement.


Consent disclosures for online platforms

One of the CPPA’s central changes is a move from PIPEDA’s more flexible standards to more clearly worded and specific requirements. While the requirements for consent change little from their previous incarnation in PIPEDA, the clarity with which they are set out in the new Act will force some changes to privacy policies and disclosures by internet platforms.

Among the largest of these clarifications is that disclosures for valid consent must include an express statement of the reasonably foreseeable consequences of the collection, use, or disclosure of personal information. Under PIPEDA, the requirement was merely that it was reasonable to expect that a person would understand those consequences, which provided much more wiggle room to leave negative outcomes unmentioned. Under the CPPA, organizations must make clear precisely what they foresee as possible consequences. While it’s unclear how this will be implemented in the case of online platforms, one could imagine this might include setting out the consequences of everything from a data breach to the potential impact TikTok’s recommendation algorithms might have on its users’ mental health!

All of this must also now be set out in “plain language.” Under PIPEDA, the requirement was that disclosures be set out in a manner that an individual can reasonably understand. But the requirement of “plain language” potentially demands a complete rewrite of privacy policies. Given what may be complex uses and consequences—and since lawyers tend to be bad at writing in plain language—this may prove a significant challenge for platforms.

Express consent and its exceptions in the case of online platforms

A few small changes from PIPEDA to the CPPA mean that the new legislation could have a massive impact on the business models of many of the largest online platforms—depending on interpretation.

One of the subtlest but potentially most important changes comes in section 15(7) of the CPPA, with respect to requirements of consent. While PIPEDA specifies that an organization cannot, as a condition of providing a product or service, require consent to anything beyond “explicitly specified, and legitimate purposes,” under the CPPA the limit would become “what is necessary to provide the product or service.”

While it is possible to argue that recommendation algorithms and targeted advertising are “legitimate”—as mentioned, targeted advertising was found to be a legitimate purpose by the Privacy Commissioner, who also found that Facebook could require consent for them as a condition of service [2]—this change could matter, as it is harder to argue these features are “necessary to provide the product or service.” After all, what exactly is the service that a platform like Facebook offers? Does it need to collect behavioural information to connect you to friends and family? Does its “service” include being recommended content, or served targeted ads? If Facebook requires you to agree to its privacy policy to sign up, which includes using your information for advertising and recommendations, is that valid consent? The answer is unclear.

It’s possible that companies like Facebook do not need consent for such uses. A significant change in the CPPA is a new “legitimate interest” exception, which permits the collection or use of information without consent where it is for the “purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use” and which a reasonable person would expect. However, the CPPA would expressly bar such collection and use without consent where it is for the “purpose of influencing the individual’s behaviour or decisions.”

A number of interpretive questions arise here. What is a necessary or legitimate interest in the case of online platforms? Are targeted ads and recommendations reasonable to expect, or could non-targeted advertising be a viable alternative? And what does it mean to influence behaviour or decisions?

Taken together, these provisions leave real doubt about whether collecting personal information for recommendation and advertising purposes remains permissible: it’s not clear that platforms can rely on express consent at sign-up, if it is a condition of service, or on the exceptions to consent requirements, since the purposes may be to influence behaviour. While I find it unlikely that these provisions will prohibit the business models of some of the largest corporations in existence, it is a possibility, and something undoubtedly of concern to the platforms themselves.

Consent and search engine indexing

Questions about the permissibility of search engines indexing and displaying personal information have been raised often over the past decade. In the European Union, under the General Data Protection Regulation (GDPR), search engines like Google are considered to be “processing” the personal information of individuals when they index and list pages that contain such information. While the GDPR permits this, the “right to erasure” (commonly known as the “right to be forgotten”) allows individuals to ask a search engine to remove results for their name where that information is privacy-invasive and not in the public interest.

Similar questions have arisen in relation to PIPEDA. In 2021, the Federal Court found that Google was indeed collecting, using, and disclosing personal information in search [3]. Though the courts did not address the question, it was unclear under PIPEDA whether search engines had the authority to handle personal information in this way. While PIPEDA offers no right of erasure, it also has no clear provision under which a search engine can collect this information without consent—and consent, in most cases, does not exist.

Under the CPPA, a search engine is similarly collecting and using personal information where it indexes a webpage containing that information and displays it in search results. While such collection is clearly done without consent in most cases, it seems likely that a search engine will be able to avail itself of the new “legitimate interest” exception. It will likely have a legitimate interest in collecting personal information that is already present online for the purposes of web search. In most cases, that legitimate interest will outweigh the potential adverse effect on the individual, and this kind of collection is one a reasonable person would expect. Further, this collection is not for the purposes of influencing individual behaviour. Thus, it seems likely that search engines may in fact be better protected by the new rules, at least with respect to listing search results.

Right of deletion

Section 55 of the CPPA provides for a right of deletion, under which an individual can request the removal of personal information held by an organization if the organization has contravened the CPPA, if the individual withdraws consent, or if the information is no longer necessary to the provision of a service requested by the individual. There are a number of potential reasons an organization can refuse such a request, such as if preservation is required by law, or if deleting the requested personal information would necessarily involve deleting someone else’s personal information.

Notably, this does not create a right to be forgotten by search engines, like the right that exists under the EU’s GDPR. Search engines would be collecting information under the legitimate interest exception, and thus would be doing so in compliance with the Act. And, as consent was never obtained, there would be no consent to withdraw. So unless a person can argue that their information is no longer necessary to the provision of a service they requested (an odd claim if they never requested a service in the first place), no right of deletion would be available.

Access to information about decisions, recommendations, and predictions

Under the CPPA, organizations will have to publicly provide a general account of how they use automated decision systems to make “predictions, recommendations, or decisions about individuals that could have a significant impact on them.” They must also, on request by a person whose personal information was used to make such an automated decision, recommendation, or prediction, provide an explanation that would include “the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.”

This means that organizations must supply some degree of transparency and explanation for their use of automated decision-making systems where a decision, recommendation, or prediction has “significant impact” on a person. However, the term “significant impact” is not defined, and it is unclear if such things as automated social media content moderation would qualify. It seems unlikely that recommendation engines—such as those on YouTube, Spotify, or Amazon—would be found to have a significant impact, even where they use personal information to make such recommendations.

Conclusion

It remains to be seen whether the CPPA will be enacted by Parliament, and if so, whether it will have a significant impact on the business models and services of large internet intermediaries like social media platforms. Regardless, if passed, it’s clear that the legislation will spur an increase in transparency concerning potential privacy risks created by an organization’s data practices, even if the underlying data collection and handling practices do not significantly change.

Given that the CPPA comes with significant new administrative penalties for non-compliance, new powers for the Privacy Commissioner, quasi-criminal prosecutions, and a private right of action allowing individuals to sue, the new legislation gives organizations a strong incentive to bring their collection, use, and disclosure of personal information into line with the new requirements. Internet users can therefore expect some improvement in the behaviour of internet platforms, even if many of the core concerns with surveillance capitalism remain.

Notes

[1] See e.g. Turner et al v Telus Communications Inc. et al, 2005 FC 1601.

[2] Report of Findings into the Complaint Filed by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) against Facebook Inc. under the Personal Information Protection and Electronic Documents Act, by Elizabeth Denham, Assistant Privacy Commissioner of Canada, online: https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2009/pipeda-2009-008/

[3] Reference re Subsection 18.3(1) of the Federal Courts Act, 2021 FC 723.

About the author

Matthew Marinett is an assistant professor at the Ted Rogers School of Management at Toronto Metropolitan University, and a doctoral candidate at the University of Toronto’s Faculty of Law. His research focuses on the regulation of the internet and internet intermediaries with a particular focus on freedom of expression, artificial intelligence, privacy, and intellectual property. 

