ABSTRACT
Artificial intelligence (AI) systems are now embedded in corporate decision-making, carrying legal, financial, safety, and rights-affecting consequences. While AI is widely recognized as a material source of corporate risk, a critical gap persists in the governance literature: although boards of directors face expanding fiduciary liability for AI-related harms, existing scholarship has failed to specify the concrete governance architecture required to discharge this duty. Consequently, boards operate under escalating legal exposure without a clear, legally defensible blueprint for accountability. Addressing this gap, the paper draws on settled fiduciary doctrine, comparative governance norms, and emerging disclosure practices to argue that AI governance is a non-delegable board-level obligation rather than a discretionary advisory function. Its central contribution is the derivation of a mandatory, tiered accountability architecture that translates abstract fiduciary liability into enforceable corporate control. The architecture comprises: (1) a standing Board AI Risk Committee vested with sovereign oversight authority; (2) a named AI Fiduciary Officer with independent escalation and intervention powers; and (3) a tiered risk-approval and documentation regime requiring explicit board authorization for high-risk AI deployments. Rather than proposing new ethical principles, technical safeguards, or anticipatory regulation, this paper clarifies the minimum institutional structures boards must establish to satisfy existing duties of care and oversight. By converting fiduciary exposure into a concrete governance mandate, it provides a legally cognizable, jurisdiction-agnostic standard for evaluating AI oversight, thereby reframing the inquiry from aspirational compliance to demonstrable institutional control.
Abdul Majid Iftikhar Mahmud, From Liability Exposure to Accountable Control – A Mandatory Fiduciary Architecture for Board-Level AI Governance (December 28, 2025).