Dan Burk, ‘Racial Bias in Algorithmic IP’

Machine learning systems, a form of artificial intelligence (AI), are increasingly being deployed both to create innovative works and to administer the intellectual property (IP) rights associated with those works. At the same time, evidence of racial bias in IP systems is manifest and growing. Legal scholars have already noted that as AI becomes part of the intellectual property landscape, the biases present in existing IP systems may infect algorithmic processes trained on data from past practices. Unfortunately, much of the discussion to date conflates technical biases in AI systems with social biases, and the two require disambiguation. The latter type, social bias, is already endemic throughout IP, so the addition of AI systems requires special consideration only to the extent that they present special problems.

In this essay I begin to identify such social bias problems that are particular to algorithmic determinations made through AI processing. One set of problems relates to the illusion of numerical objectivity: AI outputs tend to be assigned undue weight owing to the widespread but fallacious impression that they are objective and neutral. A second set of problems relates to the performative nature of algorithmic processes: they tend to produce the effects that they assume, and in the intellectual property context they hold the potential to alter the nature of protected works. Identifying these problems shows that currently proposed solutions will be inadequate, and points toward a different approach to addressing racial bias in algorithmic IP.

Burk, Dan L., 'Racial Bias in Algorithmic IP' (August 15, 2021). 106 Minnesota Law Review Headnotes (forthcoming 2022).

First posted 2021-08-20 11:00:21
