Martin Ebers, ‘Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s)’

Explainable Artificial Intelligence (XAI) is relevant not only for developers who want to understand how their system or model works in order to debug or improve it, but also for those affected by such technology. Determining why a system arrives at a particular algorithmic decision or prediction allows us to understand the technology, develop trust in it and – if the algorithmic outcome is unlawful – initiate appropriate remedies against it. Additionally, XAI enables experts (and regulators) to review decisions or predictions and verify whether legal and regulatory standards have been complied with. All of these points support the notion of opening the black box. On the other hand, there are a number of (legal) arguments against full transparency of Artificial Intelligence (AI) systems, especially in the interest of protecting trade secrets, national security and privacy. Accordingly, this paper explores whether and to what extent individuals are entitled, under EU law, to a right to explanation of automated decision-making, especially when AI systems are used.

Ebers, Martin, Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s) (August 9, 2021). Liane Colonna/Stanley Greenstein (eds), Nordic Yearbook of Law and Informatics 2020: Law in the Era of Artificial Intelligence.

First posted 2021-08-14 14:00:52
