Beatriz Botero Arcila, ‘AI Liability in Europe: The Problem of the Human in the Loop’

ABSTRACT
Who should compensate you if you get hit by a Tesla in ‘autopilot’ mode: the safety driver or the car manufacturer? What about if you find out you were unfairly discriminated against by an AI decision-making tool that was being supervised by an HR professional? Should the developer compensate you, the company that procured the software, or the (employer of the) HR professional who was ‘supervising’ the system’s output? Does anything change if the harm occurs when a doctor uses, or decides not to use, an AI system to assist in a procedure? These situations all involve liability for harms that are caused by or with an AI system and, as it turns out, these questions do not have easy answers. This Article examines how the recently proposed AI liability regime in the EU – a revision of the Product Liability Directive and an AI Liability Directive – effectively complements the AI Act, and how it addresses the particularities of AI-human interactions. In doing so, this Article carefully traces the ways in which the EU AI Act’s risk regulation approach frames, interacts with, and drifts over to the EU’s liability framework, specifically when dealing with hybrid and human-machine systems.

Botero Arcila, Beatriz, AI Liability in Europe: The Problem of the Human in the Loop (2023).