ABSTRACT
Artificial intelligence (AI) systems are increasingly being integrated into activities along the continuum of patient care. In oncology, which relies heavily on radiology, AI applications span the oncologic imaging workflow, from image acquisition to cancer screening to treatment planning and response monitoring. As a result, AI-assisted radiology plays an increasingly important role in cancer screening, diagnosis, staging, response assessment, and prognosis. In clinical settings where the permissible margin of error is slim and AI-assisted decision-making can significantly affect human lives, the reliability of AI is of utmost importance. However, studies have shown that patient safety is at risk when poorly or incorrectly labeled data lead these systems to faulty predictions, or when their trained algorithms are biased. Policymakers are therefore left with the problem of providing clear rules on who is liable when an AI system produces a harmful output, or fails to produce an output, to the detriment of the end-users of these technologies, without stifling AI innovation.
Scholarship on the optimal liability framework for AI-related tort claims in Canada is animated by the inability of traditional liability schemes to resolve two questions: how to prove harm or loss resulting from the use of AI technology, and whom to hold responsible for that harm or loss. More research and analysis are needed in Canada on how to assign legal responsibility for AI-related claims. Lessons from how AI regulation is developing in Europe suggest that various factors will shape this framework, including law, market forces, norms, and technology. This paper examines the challenges of establishing tort liability for AI-related claims in healthcare, with a comparative focus on Canada and Europe.
The paper begins with an overview of the current legal frameworks on AI in both regions, including an analysis of the challenges of applying traditional tort liability principles to AI-related claims. It argues that traditional tort liability principles are inadequate for the distinctive features of AI-related claims and that targeted statutory tort liability rules are necessary to provide adequate remedies to victims, on the one hand, and to promote innovation in the field, on the other. Through a case study of sensitive uses of AI in healthcare, such as medical diagnosis, the paper concludes by recommending clearer tort liability rules implemented through risk-based regulation, similar to the European Union’s regulatory framework, to prioritize safety and reliability while preserving the social benefits of AI technology.
Dugeri, Michael, Rethinking Tort Liability Regimes for AI-Related Claims in Canada and Europe: A Case Study of AI Applications in Disease Diagnosis (April 20, 2023).