Keeping the Dead Alive: AI Grief Bots and Privacy Concerns 

Updated: Apr 22



Photo by Rogue Lin on Unsplash

Background 

For centuries, people have endured the grief of losing loved ones and longed for seemingly impossible opportunities to reconnect with the deceased. Now, thanks to AI advancements, that reconnection is possible, for the right price. 


The emergence of AI “grief bots” has allowed us to resurrect the dead regardless of their consent, posing challenges to autonomy and privacy in this life and the next. Grief bots arguably aid healing by providing support through loss and traumatic life changes. These bots are designed to mimic the mannerisms, tone, and personality of the deceased after digesting whatever information people provide or can be found online, such as social media posts, videos, photos, and letters. Once these materials are uploaded, grief tech companies like You Only Virtual can create lifelike avatars that realistically imitate the deceased and are capable of holding a conversation. 


Grief tech offerings have grown to include commissioning one’s own bot, along with the creation of bots at the request of loved ones after a person’s passing. Companies like HereAfter AI have created a system in which interviews are conducted while the subject is still alive, and their answers are used to generate responses well after their death. Other leading tech companies have dabbled in grief tech in a similar capacity. Amazon introduced an Alexa feature that uses a deceased loved one’s voice in its capacity as a virtual assistant; Alexa can successfully mimic a voice using only an audio recording of the person speaking. In an interview, Mark Zuckerberg shared his prediction that the future of virtual reality lies in the digital afterlife and that his Metaverse platform would address an existing demand to interact with the deceased. Meta’s new technology scans faces to craft three-dimensional virtual models of people, supported by AI, that users can then interact with in virtual reality.


Ethics of the Digital Afterlife

Even though people can elect to participate in creating their digital replicas, many have not yet done so. As a result, grief bots are often commissioned by surviving family members who seek assistance with processing grief. Naturally, issues of consent arise in the creation of bots that use personal information of the deceased without their explicit consent. If someone dies before consenting to continued existence in digital form, how should that be addressed? How will we prevent misuse of a grief bot by third parties other than the grievant?


Grief tech companies seemingly live in a gray area of the law regarding personal data regulation, and as AI advances, the threat to privacy, autonomy, and consent becomes a more significant concern. Grief bots are essentially deepfakes in the sense that a realistic avatar acts like a person in real time, even though that person is not actually involved. This raises the question of what protections are available to prevent misuse of these tools for theft, manipulation, or mischaracterization of the person. AI bots are no longer limited to providing responses based solely on statements made in the past. As grief bots interact with people, they can evolve and say things outside the scope of the data used to create them. If a bot expresses a view that does not align with the provided information, it can seriously damage that person’s legacy.


Determining and regulating ownership of grief bots also raises concern. When a living person commissions a digital avatar of the deceased, ownership of the avatar and the data it holds becomes unclear. Would the tech company own the grief bot and its data and reserve the right to rescind someone’s ability to interact with the bot if payment isn't received? 


Additional threats to privacy exist regarding access to personal data shared with the grief apps that create these bots. Traditionally, some grieving people would visit therapists, and the personal data shared in those sessions is clearly protected by law. When personal data is entered into apps not connected to official medical providers, the regulation of that data is far less clear. In this context, personal data can include journal entries, letters, videos, voice recordings, answers to specific interview questions about a person, mood data, financial information, and other concerns that a person has expressed while using grief bots. Data voluntarily shared by users to create grief bots, and later shared with the grief bot itself, is vulnerable to being obtained illegally through hacking or openly bought and sold by big tech companies. 


In 2023, 23andMe lost DNA data on nearly seven million users, and the U.S. Federal Trade Commission found that BetterHelp and GoodRx were sharing user data with advertising companies. Big tech companies’ failure to protect user data, or to refrain from sharing it, is not a new concern. But the sheer amount of intimate personal data collected by grief bots magnifies the severity of potential misuse. Under the guise of compassion, grief tech companies are building databases of sensitive information that they cannot protect and willingly exploit.


Commercial Exploitation of Grief

Although grief bots have helped some people gain closure with deceased loved ones, grief tech companies are in a position to cause more societal harm than good. Like any business, grief tech companies are driven by profit and thus incentivized to find business models that exploit grief. Concerns arise over potentially predatory models, such as subscription plans that gate access to grief bots behind certain price points, which can foster dependency on the bot and an inability to accept the loss of a loved one. 


Societal Impact 

As AI continues to advance, grief bots will only become more lifelike, more convincing, and cheaper to create, prompting concerns about how the grieving process will change. 


The traditional grieving process includes only a one-way connection from the living to the deceased, through reflecting on memories or looking back at old photographs and communications. This process eased people into the next chapter of their lives, one that had begun without the deceased. 


Grief bots disrupt this natural process: the bot’s ability to respond immediately as a realistic version of the deceased creates the illusion that the deceased has not actually departed. Reliance on grief bots invites avoidance of unpleasant aspects of life, which can harm people of all ages as they become more disconnected from reality and their grieving is prolonged. Soon, AI-powered virtual avatars of the deceased in the Metaverse will enable avoidance of grief, since users can interact with the avatars as if they were real. This platform will inevitably create an unnaturally accessible way to connect the dead and the living.


Legal Impact


Data Privacy Concerns

The primary issue with grief bots is the data privacy and lack of consent of the deceased who are being emulated. Under common law, the right to privacy is a personal right; consequently, privacy protection does not extend post-mortem. For example, family members of a deceased individual cannot bring suit for a privacy violation on behalf of their loved one. 


In many jurisdictions, data protection laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) primarily safeguard the privacy rights of living individuals. While laws in the medical field, such as the Health Insurance Portability and Accountability Act (HIPAA), protect personally identifiable health information for 50 years after death, the same does not apply to data privacy generally. These privacy regulations do not extend their protections to the data of deceased persons, creating a legal gray area in the handling of posthumous digital information. This void leaves room for the misuse and exploitation of such data without the consent of the deceased or their heirs.


The general public understands the ethical importance of consent: 58% of survey respondents support digital resurrection only if the deceased had explicitly consented, while only 3% support grief bots when consent is absent. 


Furthermore, no clear legal regulation governs the handling of individuals’ data after death. As a result, Terms of Service agreements between users and data handlers remain the sole instruction for the posthumous use of data. But these agreements vary greatly. Most remain silent, merely stating that accounts will be deleted after a certain period of inactivity. Some companies, such as Google, LinkedIn, and PayPal, require legal documentation proving the user’s death before deleting the account or releasing its content. Facebook, notably, offers the option of “memorializing” an account, which keeps the user's content posted and allows friends and loved ones to share memories on the user’s timeline. While some states have passed laws permitting fiduciaries to access and hold authority over the digital assets of a deceased account holder for estate purposes, these Terms of Service have posed legal hurdles, as platforms argue that because the deceased agreed to the Terms of Service, the agreement remains binding even upon their passing. 


The scraping of deceased individuals’ digital personas to create grief bots highlights the need for platforms to update their Terms of Service with provisions that specifically address the posthumous use of user data, ensuring clarity and respect for user autonomy after death. 


Intellectual Property Concerns

The right of publicity, an extension of the right to privacy, grants individuals control over the commercial use of their name, image, voice, likeness, or other identifiable aspects of their persona. While no federal regulation exists, many states have addressed the right of publicity. In the context of grief bots, however, this right varies greatly depending on whether a state classifies the right of publicity as a privacy right or a property right. In California, for example, the right of publicity is an intellectual property right, meaning that the use of one's persona is protected for a certain duration after death. Where publicity is instead considered a privacy right, the protection ends at death. Courts have therefore typically followed the legislation of the state in which the deceased individual lived.


Conclusion 

Grief is a natural part of life and of being human. Grief bots can help people process grief by providing closure, or harm them by disrupting the natural grieving process; whether the harm outweighs the benefit is not entirely clear. But the ethical concerns arising from threats to privacy, autonomy, and consent are evident, and strict regulation of AI tools for navigating grief must be implemented.


*The views expressed in this article do not represent the views of Santa Clara University.

