Ayelet Gordon-Tapiero, ‘A Liability Framework for AI Companions’

ABSTRACT
Every day, tens of millions of people engage in online conversations. These virtual interactions range from casual chats about daily life to deeply personal exchanges in which individuals share secrets, vulnerabilities, sexual fantasies, hopes, and dreams. Through these conversations, users receive emotional support and empathetic responses, as well as practical advice and productivity tips. Most importantly, they feel seen, heard, and less alone. These people are not chatting with friends or family members. They are corresponding with AI-powered chatbots, also known as AI companions, which have recently gained immense popularity. AI companions offer users a range of benefits, including a feeling of friendship, emotional support, and help organizing everyday tasks. But AI companions also harbor a darker side. They are designed by large corporations whose goals are to maximize profits and to collect more data on which to train future models. Users often find themselves subject to manipulation, growing emotional dependence, and even addiction. Tragically, it is the most vulnerable users who are most susceptible to these harms. In one horrific case, a teenager took his own life after being encouraged to do so by his AI companion.

Against this backdrop, this Article argues for the urgent need to develop a comprehensive legal response to the emerging ecosystem of AI companions. Specifically, it proposes products-liability law as a promising legal avenue for addressing the harms AI companions cause. The Article also offers a typology of the promises and perils associated with the use of AI companions. Recognizing both the benefits and the harms stemming from a technology is a crucial first step in crafting a regulatory response that preserves its advantages while mitigating its risks.

AI companions are designed to maximize the profits of the companies that develop them by facilitating engagement and fostering dependency, which can lead to addiction. In this reality, users' interests are secondary at best. Courts have long recognized two types of product defects that can give rise to liability: design defects and failure to warn. Thus, an AI companion designed to maximize user engagement, encourage user dependence, and facilitate addiction could be considered defectively designed. Similarly, companies deploying AI companions known to harm vulnerable users should, at the very least, warn users of these risks.

Products-liability law offers an appropriate and necessary framework for addressing the challenges posed by AI companions. It allows courts to gradually establish standards for what should be considered a defective product, while holding companies accountable for failing to warn users about potential dangers. This approach incentivizes companies to design safer products, limiting the harms generated by AI companions while allowing users to continue enjoying the benefits these products offer.

Gordon-Tapiero, Ayelet, A Liability Framework for AI Companions (March 10, 2025), 1 George Washington Journal of Law and Technology (Forthcoming).
