Eugene Volokh, ‘Large Libel Models? Liability for AI Output’, 3 Journal of Free Speech Law 489 (2023).

AI in the form of Large Language Models (LLMs) is altering the ways in which we work, learn, and live. Along with their many upsides, an already familiar downside of LLMs is their propensity to ‘hallucinate’ – that is, to respond to factual queries with predictions or guesses that are false yet proffered as true. Some of these hallucinations are not merely false but defamatory. For example, if one were to query an AI program, ‘Of which crimes has Professor X of ABC Law School been convicted?’, it might respond with a fabricated list of offenses. When defamatory hallucinations occur, who faces (or should face) liability, and on what terms? In ‘Large Libel Models? Liability for AI Output’, Eugene Volokh lays out with great care a detailed roadmap for answering these questions …
[John CP Goldberg, JOTWELL, 14 November 2023]