Rizner and Krzus, ‘The AI’s Philosophy of Contract: An Empirical Study of Breach, Remedies, and Model Heterogeneity’

ABSTRACT
Is a contract a moral promise to be kept or an option to perform or pay? While this question has long divided legal theorists between 'Holmesian' realists and promissory moralists, it is now being posed to a new 'legal mind': the large language model applied to legal analysis. This Article presents a large-scale empirical study of how large language models (LLMs) advise on efficient breach, drawing on 158,388 decisions across five frontier systems. We find that off-the-shelf models exhibit radically different 'jurisprudences' of contract. First, we document extreme model heterogeneity: on identical efficient-breach vignettes, baseline breach recommendations span a 96-percentage-point range, from virtually never recommending breach (3.9%) to effectively always recommending it (100%). Second, consistent with behavioral 'licensing' effects in humans, the presence of a liquidated damages (LD) clause increases the propensity to advise breach (avg. +8.9 percentage points), reframing the promise as a priced option. Third, role-prompting is unstable: instructing a model to 'act as an attorney' shifts advice in opposite directions across model families. We conclude that for firms and courts procuring legal AI, model selection is inevitably a normative choice among competing philosophies of contract law.

Rizner, John and Krzus, Matthew, The AI’s Philosophy of Contract: An Empirical Study of Breach, Remedies, and Model Heterogeneity (November 29, 2025).