ABSTRACT
Artificial intelligence (AI) is increasingly integrated into medical decision-making, yet its liability implications remain complex, particularly when physicians differ in diagnostic skill and their quality is unobservable. This paper develops a principal-agent model in which a social planner designs medical liability to regulate a physician with private quality information who chooses among a standard treatment, a personalized judgment-based treatment, and following an imperfect AI recommendation. Our analysis yields several novel insights. First, we show that the optimal mechanism under asymmetric information is surprisingly simple: a uniform, one-size-fits-all liability level for all physician types who deviate from the standard of care. Despite physician heterogeneity, this simple policy often achieves the full-information first-best outcome, particularly when standard care is reliable or AI is highly accurate. Second, the relationship between AI accuracy and optimal liability is non-monotonic. Contrary to common intuition, better AI does not always imply more relaxed liability. As AI accuracy increases, the optimal liability either decreases monotonically or follows an inverted-U pattern, depending on the uncertainty of the standard treatment. Third, asymmetric information does not universally reduce social welfare. Welfare loss arises only when standard care is unreliable and AI accuracy is too low; even then, its magnitude follows an inverted U-shape, initially increasing as AI complicates the regulatory problem, but declining as more accurate AI helps mitigate it. Finally, we find that information asymmetry is a double-edged sword in the presence of AI, and greater transparency does not benefit all stakeholders equally.
Mao, Rui, Tingliang Huang, and Houcai Shen. "Optimal Liability Design for Medical AI" (January 28, 2026).