ABSTRACT
This article examines the crisis of legal liability in the age of generative AI. It argues that the problem lies not in artificial intelligence as such, but in a systematic category error in its legal interpretation. Building on the doctrine of AI as instrumentum vocale, the article introduces the concept of simulated reasoning: the production of reasoning-like structures without a reasoning subject. It shows that classical theories of legal responsibility, across both Anglo-American and continental traditions, presuppose a thinking subject. Once legal practice begins to rely on noncognitive systems that simulate reasoning, this assumption becomes unstable. The article analyzes the resulting procedural and evidentiary consequences, including lawyers’ disclosure duties, the asymmetry of judicial AI use, and the limits of formal verification in notarial systems. It concludes that the crisis of legal liability is not technological, but ontological.
Kildeev, Adel, Simulated Reasoning and the Crisis of Legal Liability (April 4, 2026).