ABSTRACT
When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals – making them broadly liable for negligent injuries occurring within the hospital system – with an important caveat: hospitals must have access to the information needed for adaptation and monitoring. If that information is unavailable, we suggest that liability should shift from hospitals to the developers who keep that information secret.
WN Price II and IG Cohen, Locating Liability for Medical AI, 73 DePaul Law Review 339 (2024).