What happens when we become data? Wendy H. Wong explores the consequences of datafication

 
In an op-ed for The Globe and Mail, SRI Research Lead Wendy H. Wong examines the issues at stake around facial-recognition technology, and their impacts on human rights and consent. Wong argues the “stickiness” of data alters basic conceptions of autonomy and dignity in ways that “profoundly change human experience”—leading to a need to reconsider the framework of human rights for the digital era.

“Our faces are, in many ways, who we are to the world,” writes Wendy H. Wong in a recent op-ed for The Globe and Mail. “They are the way people identify and remember us. When we digitize our faces, we become data—out there forever.”

In today’s world of widespread surveillance and advanced facial-recognition technologies, the status of our faces has shifted. Formerly linked only to the intimacy of our discrete physical bodies, faces now circulate as electronic data within the broader public sphere of digital mass communications.

One of the eight inaugural research leads at the Schwartz Reisman Institute for Technology and Society (SRI), Wong is a professor of political science at the University of Toronto and a Canada Research Chair in Global Governance and Civil Society. Her research focuses on the need to redefine human rights in the era of digital technology, interrogating how datafication is reshaping the limits and extensions of human experience, agency, and responsibility.

In her op-ed, Wong considers a recent case before the Office of the Privacy Commissioner of Canada (OPC), in which Clearview AI, an American facial-recognition technology company that provides its tools to law enforcement, including in Canada, was found to have violated Canada’s federal private-sector privacy law. The OPC summarized the findings of its investigation in a report, released in February 2021, which calls on Clearview AI not only to stop offering its services in Canada, but also to stop collecting biometric data from Canadians and to delete the images it has already collected. Privacy Commissioner Daniel Therrien described Clearview AI’s practices as a form of illegal mass surveillance that creates an “unacceptable” system in which “millions of people who will never be implicated in any crime find themselves continually in a police lineup.”

The OPC’s report is among the first instances of a national government pushing back against the risks of data-informed mass surveillance, although it is not alone. Clearview AI has recently made headlines in the EU, where complaints allege that its data-harvesting practices violate European privacy laws. And in 2020, the company was sued by the American Civil Liberties Union for violating Illinois’s Biometric Information Privacy Act.

Facial-recognition technology, Wong observes, “takes something very important away from us: control over our own faces.” The impacts of this power asymmetry are felt most acutely by groups already subject to discrimination: facial-recognition technologies have been shown, for instance, to reinforce rather than reduce the racial profiling of Black men by police.

In her research, Wong explores how our lives are being actively reshaped by the advance of digital technologies, which convert our activities, movements, and relationships into data points that can be leveraged by corporations. In effect, she contends, this process changes who we are as humans. Ownership of this data is a point of contention, pulled between conflicting narratives: on the one hand, data is an extension of the users who generate it; on the other, it is collected and aggregated by corporations that assert control over all materials generated within their platforms.

In the case of Clearview AI, which does not collect data directly from users but instead scrapes its content (more than three billion images) from materials publicly available on the internet, the question is whether it is permissible to do so without the consent of the individuals from whom the data originates. The issue is more complex for other platforms and services, which solicit consent through agreements that are often opaque and all-encompassing, and sometimes simply ignored, as Ron Deibert, director of the Citizen Lab and professor at the Munk School of Global Affairs & Public Policy and in U of T’s Department of Political Science, observes in his 2020 Massey Lectures.

As Wong observes, “Data can be copied, transferred, and analyzed indefinitely. We may never know where it goes, who has it, or what purpose it serves.” Consequently, she asserts, “In the age of datafication, it has become almost impossible to take someone’s ‘consent’ as meaningful.”

The “stickiness” of data—its reproducibility and shareability—fundamentally alters basic conceptions of individual autonomy and dignity in ways which, Wong argues, “profoundly change human experience.”

In this context, Wong highlights the existing framework of international human rights as an essential guide for articulating “what needs to be minimally ensured to protect our autonomy and dignity.” In her op-ed, Wong proposes that we must go “back to the basics” of interrogating the fundamental formulations of human rights, “even if it means changing some and creating new ones.”

Wong explored these challenges further in her recent contribution to SRI’s weekly seminar series, “Rebooting human rights in a datafied world,” in which she examined the distinctions in autonomy and rights between our physical bodies and our data footprints and identities, which she terms “Data You.” As Wong demonstrates, the growing significance of “Data You” poses numerous challenges for how human rights are formulated and implemented.

These issues form the basis of Wong’s ongoing SRI research project, “Human Rights in the Digital Era,” which examines the challenges that digital technologies, Big Data, and artificial intelligence pose for data governance from a human rights perspective. Drawing together analyses from political science, international relations, and the social sciences more broadly, Wong’s research is a prime example of the work the Schwartz Reisman Institute exists to support.

As Wong concludes in her Globe and Mail op-ed, we do not yet have an answer to the question of who owns the data generated from our faces. But by adopting a human rights framework that centres values such as autonomy, dignity, and freedom, she demonstrates how we can begin to formulate human-centred responses to the complex questions surrounding privacy and consent.

 
