Kyle Logue, ‘Enterprise Liability for (Some) AI’

ABSTRACT
AI systems can drive cars, diagnose diseases, manage investment portfolios, and draft legal documents. How should tort law respond when these systems cause harm? This Article argues that the answer depends on the type of harm involved.

For physical harms, including injuries from autonomous vehicles, medical AI, and robotic systems, I argue for enterprise liability: a no-fault regime in which manufacturers and deployers bear the costs of all physical injuries arising out of the use of AI systems, regardless of fault. The case rests on three conditions. First, the AI enterprise is the cheapest deterrable cost avoider. Second, consumers are systematically undeterrable, both because of cognitive biases and because of what I have elsewhere called the first-party insurance externality: consumers’ health and disability insurers do not adjust premiums based on AI-product exposure, leaving consumers with no financial incentive to avoid risky products or to demand safer ones. Third, the residual risk of physical AI harm is idiosyncratic and insurable, such that requiring the enterprise to build coverage into the price of its product is reasonable. When all three conditions hold, enterprise liability can produce better deterrence, fairer risk distribution, and stronger incentives for safety-improving innovation than a negligence regime. I sketch a federal compensation statute modeled on workers’ compensation, with scheduled pecuniary and nonpecuniary benefits, administrative adjudication, and full preemption of state tort claims for covered harms.

For financial and legal AI, I reach a different conclusion. The AI developer or advisor may well be the cheapest cost avoider for negligent advice or execution failures, and existing tort and fiduciary principles, including a heightened duty of care, fiduciary duties of loyalty, and strict liability for execution failures, should apply. But the residual risk of financial advice is market risk: systematic, correlated, and uninsurable. Enterprise liability would make AI advisors insurers of outcomes they cannot control. The conditions for enterprise liability are not met.

The framework is not specific to AI. It identifies when enterprise liability is warranted for any product or service causing physical harm, and when it is not. The questions are always the same: what is the nature of the residual risk, who can reduce it, and who can insure it? Even without legislation, the three conditions provide a principled basis for common-law courts to move incrementally toward enterprise liability in particular contexts, and to resist that expansion where the conditions point the other way.

Logue, Kyle D., Enterprise Liability for (Some) AI (March 2, 2026).