🤖 AI Summary
This paper addresses the foundational debate over whether algorithms can intrinsically embody bias. Drawing on philosophical analysis, critical techno-sociology, and the theory of political artifacts, it clarifies the ontological status of algorithms and disambiguates “bias” across statistical, moral, and political dimensions. It advances the novel claim that algorithms are inherently political artifacts capable of harboring *internal moral bias*, rather than merely reflecting exogenous data bias. The study proposes a causal framework—“statistical bias → moral bias”—to explain how technical artifacts acquire normative valence, thereby challenging the myth of algorithmic neutrality. Empirical cases—including the UK A-level grading algorithm, academic search engines, and healthcare and hiring recommendation systems—are integrated into an ontological analysis to ground conceptual foundations for algorithmic accountability, discrimination attribution, and ethical governance. The work shifts algorithmic ethics from technical remediation toward ontological reflection.
📝 Abstract
Algorithmic bias has been the subject of much recent controversy. To clarify what is at stake and to make progress in resolving the controversy, a better understanding of the concepts involved would be helpful. The discussion here focuses on the disputed claim that algorithms themselves cannot be biased. To clarify this claim, we need to know what kind of thing 'algorithms themselves' are, and to disambiguate the several meanings of 'bias' at play. This further involves showing how bias of moral import can result from statistical biases, and drawing connections to previous conceptual work on political artifacts and oppressive things. Data bias has been identified in domains such as hiring, policing, and medicine. Examples where algorithms themselves have been pinpointed as the locus of bias include recommender systems that influence media consumption, academic search engines that influence citation patterns, and the UK's 2020 algorithmically moderated A-level grades. Recognizing that algorithms are a kind of thing that can be biased is key to making decisions about responsibility for harm and to preventing algorithmically mediated discrimination.