From mourning to machine: Griefbots, human dignity, and AI regulation

 

Artificial intelligence programs designed to mimic deceased individuals by using their digital footprint raise significant concerns about data collection and its implications for human dignity. This article explores the digital afterlife industry and the many ethical and legal challenges it presents, including considerations drawn from health, privacy, and property law in Canada.


Grief can be an overwhelming emotion, involving complex feelings of psychological distress, sadness, or pain that can be difficult to navigate. To address this emotional experience, a controversial application of artificial intelligence (AI) has recently emerged in the form of griefbots. These AI programs are designed to mimic deceased individuals by using their digital footprint, such as data on their speech patterns, sense of humour, and preferences, ultimately allowing individuals to remain digitally immortal. Griefbots are intended to help users with their grieving process after the loss of a loved one. However, the ability of griefbots to simulate new interactions between users and a digital version of the deceased, and to perpetuate the identity of the deceased, raises significant concerns about data collection and its implications for human dignity. 

This article explores the nascent ethical and legal questions posed by griefbots, and aims to open a discussion on the future of human dignity in an AI-enabled society.
 

What are griefbots?

Like companionbots, griefbots provide users with emotional connection and comfort during times of loneliness or bereavement. Using large language models (LLMs) trained on the data of the deceased, griefbots often come in the form of chatbots that generate conversations from a first-person perspective, replicating the speech patterns and personality of the deceased. Project December, for example, is a griefbot provider that lets users pay $10 to replicate a deceased person through a chatbot. More advanced versions of griefbots allow users to interact with video replicas of the deceased. Like their chatbot counterparts, these video generators are trained on audiovisual content of the deceased. Companies like re;memory create avatars of the deceased using photos and a 10-second audio or video clip.

The ways in which griefbots operate vary. While some griefbot providers offer their services to individuals who willingly want to leave a virtual replica of themselves behind, others are targeted towards grieving individuals who want to interact with the deceased. In this article, we’ll focus on the latter case, in which personal information, such as a user-provided summary of the deceased and personal text exchanges, is fed into an AI system to simulate interactions with a digital replica of the deceased.
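To make this mechanism concrete, the sketch below shows, in highly simplified form, how a griefbot might assemble user-provided data about the deceased into a prompt for a language model. This is a minimal illustration, not any provider’s actual implementation: the `call_llm` function is a hypothetical stand-in for whatever hosted model API a real service would use, and all names and data fields are assumptions made for the sake of the example.

```python
# A minimal, hypothetical sketch of how a griefbot might prompt an LLM.
# `call_llm` stands in for whatever hosted model API a real provider uses;
# the data fields below mirror the user-provided inputs described above.

def call_llm(prompt: str) -> str:
    # Stub for illustration only; a real service would call an LLM here.
    return "(model-generated first-person reply would appear here)"

def build_persona_prompt(summary: str, sample_messages: list[str]) -> str:
    """Assemble user-provided data about the deceased into a persona prompt."""
    examples = "\n".join(f"- {m}" for m in sample_messages)
    return (
        "You are role-playing a specific person, speaking in the first person.\n"
        f"Description provided by a family member: {summary}\n"
        "Examples of how this person wrote:\n"
        f"{examples}\n"
        "Match their tone, humour, and preferences in every reply."
    )

def griefbot_reply(summary: str, sample_messages: list[str], user_message: str) -> str:
    prompt = build_persona_prompt(summary, sample_messages)
    return call_llm(f"{prompt}\n\nUser: {user_message}\nReply:")

# Example use, with entirely invented data:
reply = griefbot_reply(
    summary="My father, a retired teacher who loved bad puns.",
    sample_messages=["See you soon, kiddo!", "Another day, another pun."],
    user_message="Hi Dad, I miss you.",
)
print(reply)
```

Notably, nothing in this flow checks whether the deceased ever consented to their messages being used this way; that gap is precisely what the ethical and legal discussion below turns on.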


Griefbots: Ethical and legal challenges

Besides the undeniable eeriness of AI-enabled immortality, griefbots present a plethora of ethical and legal challenges. Companies in the digital afterlife industry often disregard the dignity of the deceased and target grieving individuals to sell their products and services. Beyond the lack of information on the impacts of griefbots on users’ mental health, especially when users are young, the use of a deceased person’s data by another individual to create a griefbot raises concerns about privacy rights and human dignity. 

This raises the broader question of what happens to your digital data post-mortem. Who owns and has control over your digital footprint? And how can your digital footprint be used or manipulated past your lifetime? The lack of clear prohibitions against users feeding others’ personal data (ranging from personal messages to images and videos from the deceased’s social media accounts) into griefbots underscores existing privacy concerns. 

Moreover, there is potential for personal information to be misused through griefbots. Users can, for example, make a bot do or say things to which the person it’s based on would never have consented. This goes against internationally accepted principles of protecting the dignity of the deceased and respecting the dead. For instance, griefbots can be exploited to fulfill fantasies or used in ways that degrade the deceased. When personal information is misused, it violates the dignity of an individual, as personal information “plays a constitutive role of who [you are].” These challenges call for a human-dignity-enabled approach to regulating griefbots and other AI technologies that replicate people through data collection.


What is human dignity? 

Human dignity is a broad concept that involves the idea that, irrespective of race, gender, class, religion, and other factors, people deserve to be respected because of their humanity. This is recognized in international human rights law and in Article 1 of the Universal Declaration of Human Rights. Although not formally written into the Canadian Constitution, dignity is an underlying value of Canada’s Charter of Rights and Freedoms and has been used to interpret Charter rights. In landmark cases concerning the section 7 right to life, liberty, and security, the Supreme Court of Canada has interpreted dignity as the right to make decisions and choose how you want to express yourself free from state interference. Dignity has also been used in the interpretation of the section 15 right to equality, reflecting the need to protect self-respect and self-worth.

Current regulations governing AI do not adequately address challenges related to the preservation of human dignity and privacy, or the potential misuse of the data of the deceased.

So, how can we adopt a human-dignity-centred approach to data and privacy protection, especially in the realm of griefbots?


How does Canadian law regulate griefbots? 

Current privacy laws in Canada were not written with AI in mind, and they do not adequately address the ethical and legal challenges presented by the rapid advancement of AI and new technologies. Griefbots illustrate this gap in the Canadian legal structure and highlight the need for a human-dignity approach to AI regulation.

The concept of dignity in relation to death is not new. When we think about the types of decisions individuals are legally entitled to make at the end of life, there is a widely accepted understanding that we should honour individual choice. In health law, individuals who can foresee their own death can make decisions about how they want to die: medical assistance in dying (MAID) and do-not-resuscitate (DNR) orders allow patients to die with dignity, respecting autonomy and choice in the process of death. In property law, wills and estate planning empower individuals to make choices about how to allocate their assets prior to their death. In both areas of law, licensed professionals are required to follow ethical guidelines in executing end-of-life orders or the transfer of property. Existing legal frameworks clearly hold the decisions of the deceased in high regard. 

Furthermore, in regulating the data of the deceased, federal and provincial laws serve as a patchwork of legal mechanisms to protect privacy rights. With regard to information held by a corporate entity, the Personal Information Protection and Electronic Documents Act (PIPEDA) prohibits organizations from disclosing an individual’s personal information without their knowledge or consent for 20 years after their death. 

This prohibition is carried over in Bill C-27, which, if passed, will repeal parts of PIPEDA and replace them with the Consumer Privacy Protection Act (CPPA). However, both PIPEDA and the CPPA specify certain situations in which valid consent for the collection, use, or disclosure of personal information is not required. For instance, under the CPPA, exceptions can be made for business activities related to providing a product or service the individual has requested. Griefbots present a grey area in this regard, as they function on the data of the deceased rather than that of the user. Nevertheless, under current privacy laws and the proposed CPPA, there is no clear prohibition on using the data of the deceased. 

Another legal avenue is the prohibition against appropriation of personality, which in Ontario is recognized as a property right. It protects an individual from wrongful exploitation of their name and likeness for commercial gain. The common law does not explicitly exclude deceased individuals from claims of misappropriation of personality, as seen in cases where legal action is brought forward by the estate of a celebrity. However, the exact scope of the tort is unclear. In any case, it does not apply to the personal use of the deceased’s data, and it prioritizes protecting the economic gains associated with an individual’s identity rather than their dignity. 


A human-dignity-enabled approach to protecting the data of the deceased

Promoting human dignity through regulation in an AI-enabled world involves protecting the data of both the living and the dead. Dignity, as an underlying value of the Canadian Charter, highlights the importance of honouring the wishes of the deceased and protecting their data from being used in exploitative ways, especially in the context of technologies like griefbots. This requires giving individuals a choice about how and by whom they want their data to be used post-mortem, and expanding privacy rights to include protection of the deceased. 

From a privacy law perspective, this could mean expanding laws to explicitly include the protection of the deceased. Post-mortem privacy includes “the right of a person to preserve and control what becomes of his or her reputation, dignity, integrity, secrets or memory after death.” This approach recognizes that, in the digital age, privacy extends beyond life, ensuring that personal information is not used or exploited without consent. Expanding privacy rights to include post-mortem protections prioritizes human dignity over economic interests, addressing a critical gap in current legal frameworks. 

A human-dignity-enabled approach also includes giving individuals agency over how and by whom their data is used. Akin to how physical assets are included in wills and estate planning, the data of the deceased can also be protected through property law: individuals can create digital estate plans that assign their digital assets to their estate or to family members. This approach requires careful consideration of what constitutes a ‘digital asset’: does it include the entirety of your digital footprint, or only personal information stored online? 

Griefbots are just one of the many ways that personal data can be collected and used post-mortem. With an increase in AI technologies that rely on user data, data protection has never been more critical. This calls for policymakers and all stakeholders to urgently consider ways in which we can integrate human dignity into AI regulation, to ensure that the rights of both the living and the dead are protected in our digital age. 



About the author

Ella Lim is a JD candidate at the University of Ottawa and a research assistant at the Schwartz Reisman Institute for Technology and Society. She holds a BA from the University of Toronto, where she majored in industrial relations and human resources, as well as the Ethics, Society & Law program. Lim’s interests lie in data governance and privacy as well as the intersection of AI and human rights. She is passionate about exploring how emerging technologies impact society and the legal frameworks that govern them.

