ABSTRACT
As artificial intelligence (AI) systems gain autonomy, they increasingly operate alongside humans in high-stakes domains such as transportation, healthcare, and military operations. This raises critical questions about how people assign blame when both human and machine agents independently contribute to harmful outcomes. While prior research has examined blame attribution in single-agent scenarios or in supervised human-machine interactions, little is known about how blame is distributed when both agents act autonomously. Across three studies (N = 628), we investigated how people assign blame and causal responsibility in autonomous human-machine interactions. Participants evaluated vignettes involving norm-violating and norm-conforming agents across domains including automated driving, medicine, and military decision-making. In Studies 1 and 2, we found that norm-violating agents were blamed more than norm-conforming agents, but this effect did not differ between human and machine agents. Study 3 extended these findings, demonstrating a compensatory pattern in which greater blame assigned to one agent reduced the blame assigned to the other, regardless of whether that agent was human or machine. Across all studies, blame judgments tracked norm violations rather than agent identity. These findings suggest that when machines are perceived as autonomous actors, they are held to the same moral standards as humans. Our results have implications for theories of moral agency and for the legal and regulatory frameworks governing AI accountability.
Meier, Jeremy, Richard Wylie, and Simon M. Laham. Who's to Blame? Blame Attribution in Autonomous Human-Machine Interactions (June 16, 2025).