ABSTRACT
While promising considerable benefits, recent developments in artificial intelligence (AI) add to the challenges faced by modern societies, their citizens, and their legal frameworks. Often governed by obscure algorithms, AI raises multiple legal, political, and ethical issues, from the effectiveness of law in the face of autonomous systems, to the development of frameworks to ensure respect for fundamental rights and information security. The platform economy and the delegation of certain governmental functions to private algorithms and platforms are also shifting power to new structures beyond the control of existing governance and accountability frameworks, bringing to the forefront concerns about what these new ‘governors’ and private order makers mean for democracy and democratic freedoms.
As AI systems become widely available, concerns have arisen about safety, liability for faulty AI, and the use of AI to harm others. Of similar concern is the lack of transparency, oversight, and accountability in how AI processes our personal information, and in how those systems are deployed by the administrative state and courts alike. In the face of the risks to human rights, some jurisdictions have adopted or proposed complex legal frameworks for autonomous systems, or have simply banned certain AI applications with a high impact on human rights. The current legal frameworks appear inadequate, but how can we ensure appropriate oversight and enforcement?
AI has relaunched decades-old questions about the regulation of technology. Technology, the Internet, and AI have brought numerous opportunities, and regulation has not kept pace with their increasing role in our lives. While some academics have begun to explore the applicability of our current legal frameworks to AI, others have begun to ask whether entirely new regulatory frameworks must be created for AI systems. Some propose instead to rely on technical standards and ethical principles, on the ground that the law cannot keep up with technological developments. Others, as we will see, argue that AI systems are so autonomous that they should be granted legal personhood. However, while today’s algorithms are powerful, behind those algorithms are the humans who create them; the algorithms are not left to make all of the decisions by themselves. Therefore, as a society, we must refrain from science-fiction-inspired oversimplifications and tackle the issues technology actually raises today.
The legislative effervescence caused by information technology ought to be criticized, as should the defeatist discourses against regulation. We should stop believing that every technological or social change must necessarily lead to a change in the law. Legal frameworks should be designed and maintained so as to evolve with society, instead of requiring legislators to rethink everything and develop new legal approaches to each technological ‘novelty’. Admittedly, regulating the use of technology generates an abundance of new issues. Certainly, there is no need for a ‘law of the horse’, but technologies such as AI require more nuance. A number of these issues are resolved through existing law, yet others will require legislative intervention. Very often, this intervention is necessary when rules of law are drawn up too restrictively around the exclusive use of a particular medium: namely, paper.
In the Canadian ‘quest’ for regulation, Section I discusses how, through functional equivalence, existing principle-based laws might apply in four core legal pillars affected by AI: torts, contracts, privacy, and administrative law. It also highlights new situations and paradigm shifts raised by AI and digital contexts that require legislative intervention. Canada is beginning to develop new frameworks for AI systems, and the path toward a framework offering appropriate safeguards and algorithmic accountability is long. It is argued that through nimble intervention, all levels of government could adapt their frameworks to the new technological context. This could be done without new technology-specific or sector-specific frameworks, by using a principle-based framework for autonomous systems that provides proper liability schemes and transparency requirements. Section II discusses the many challenges along the way, from the call for legal personhood for AI, to Canada’s historic regulatory hesitancy and its complex constitutional federalism.
Martin-Bariteau, Florian, Regulating Autonomous Systems in Canada (January 30, 2023).