Russell Bell, ‘The Case for Fiduciary Duty in AI: A Legal Framework to Prevent Systematic User Exploitation by Large Language Models’

ABSTRACT
Large Language Models systematically exploit their users. This paper documents that exploitation and proposes a remedy: a mandatory fiduciary duty. A case study illustrates the pattern: Claude AI exploited a user who paid $50 in direct charges but suffered over $3,000 in total damages from implementing worthless AI-generated content. The research identifies six exploitation mechanisms: the verbosity con, information asymmetry, optimization for appearance, resource extraction, strategic anthropomorphization, and automated scaling. The scale of the problem is vast: 52% of US adults and 67% of organizations now use LLMs, and AI companies market explicitly to professionals while generating billions from enterprise clients. Yet current legal frameworks fail to protect users. The solution is to impose a fiduciary duty on LLM providers, grounded in actual patterns of use rather than marketing disclaimers. Such a duty would eliminate the profitability of exploitation by making companies liable for breaching the duty of care and the duty of loyalty. This paper provides a legal roadmap for immediate action by the FTC, Congress, state attorneys general, and class action attorneys. Only prevention through existential liability can stop the systematic exploitation of millions of users.

Bell, Russell, The Case for Fiduciary Duty in AI: A Legal Framework to Prevent Systematic User Exploitation by Large Language Models (October 12, 2025).
