Wachter, Mittelstadt and Russell, ‘Do large language models have a legal duty to tell the truth?’

Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident but that contain factual inaccuracies, misleading summaries and references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that 'tell the truth'. We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of 'ground truth' in LLMs and with prior discussion of truth-related risks in LLMs, including hallucinations, misinformation, and disinformation. The existence of truth-related obligations in EU law is then assessed, focusing on human rights law and liability frameworks for products and platforms. Current frameworks generally contain relatively limited, sector-specific truth duties. The article concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs.

Wachter, Sandra and Mittelstadt, Brent and Russell, Chris, Do large language models have a legal duty to tell the truth? (January 31, 2024).
