Čerka, Grigienė and Sirbikytė, ‘Liability for damages caused by artificial intelligence’

Abstract:
The emerging discipline of Artificial Intelligence (AI) has changed attitudes towards intellect, which was long considered a feature belonging exclusively to biological beings, ie Homo sapiens. When the concept of Artificial Intelligence emerged in 1956, discussion began about whether intellect might be more than an inherent feature of a biological being, ie whether it could be created artificially. AI can be defined by reference both to human thinking and to rational behaviour: (i) systems that think and act like a human being; (ii) systems that think and act rationally. These definitions show that AI differs from conventional computer algorithms: AI systems are able to train themselves, storing their own accumulated experience. This unique feature enables AI to act differently in identical situations, depending on the actions it has previously performed (see the sketch after the abstract).

The ability to accumulate experience and learn from it, together with the ability to act independently and make individual decisions, creates the preconditions for damage. The damage-causing factors identified in the article confirm that AI operates by pursuing goals; in doing so, it may cause damage for one reason or another, and compensation will then have to be addressed under the existing legal provisions. The main issue is that neither national nor international law recognizes AI as a subject of law, which means that AI cannot be held personally liable for the damage it causes. In view of the foregoing, a question naturally arises: who is responsible for the damage caused by the actions of Artificial Intelligence?

In the absence of direct legal regulation of AI, we can apply article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, which states that a person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine. Such an interpretation complies with the general rule that the principal of a tool is responsible for the results obtained by the use of that tool, since the tool has no independent volition of its own. The concept of AI-as-Tool thus arises in the context of AI liability, meaning that in some cases vicarious and strict liability are applicable to the actions of AI.
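The abstract's claim that AI "stores its personal experience" and may therefore act differently in identical situations is the property that distinguishes it from a conventional, fixed algorithm. Below is a minimal, hypothetical Python sketch of that property (not from the article; all names are illustrative): an agent whose chosen action in the same situation shifts as its recorded outcomes accumulate.

```python
from collections import defaultdict

class ExperienceDrivenAgent:
    """Toy agent whose choice in a given situation depends on the
    outcomes it has previously observed, not on a fixed rule."""

    def __init__(self, actions):
        self.actions = list(actions)
        # Accumulated "personal experience": total observed
        # outcome per (situation, action) pair.
        self.experience = defaultdict(float)

    def act(self, situation):
        # Prefer the action with the best recorded outcome in this
        # situation; with no experience yet, ties resolve to the
        # first action in the list.
        return max(self.actions,
                   key=lambda a: self.experience[(situation, a)])

    def learn(self, situation, action, outcome):
        # Store the observed outcome, shifting future behaviour.
        self.experience[(situation, action)] += outcome


agent = ExperienceDrivenAgent(["brake", "swerve"])
print(agent.act("obstacle ahead"))   # "brake" (no experience yet)
agent.learn("obstacle ahead", "swerve", +1.0)
print(agent.act("obstacle ahead"))   # now "swerve": same input, new action
```

The two print calls receive the same input yet produce different actions, which is exactly why, as the abstract argues, liability cannot simply be traced to a fixed instruction written by the programmer.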

Paulius Čerka, Jurgita Grigienė and Gintarė Sirbikytė, ‘Liability for damages caused by artificial intelligence’, Computer Law & Security Review, available online 22 April 2015.

First posted 2015-04-28 16:56:39
