🤖 AI Summary
This paper critically examines the socio-technical risks that arise as toxicity detection algorithms shift from reactive content moderation to proactive interventions, such as real-time interception of messages at the keyboard. Through participatory design workshops with diverse stakeholder groups, combined with context-sensitive socio-technical analysis, the authors identify three core risks: inequitable distribution of intervention benefits, malicious circumvention and gamification of hate speech, and adversarial contamination of the underlying models. They introduce the analytical framings of “contextual insensitivity” and “interventional justice” to expose how algorithmic interventions can implicitly harm marginalized groups in complex sociolinguistic contexts. The contribution comprises an actionable ethical review checklist and design constraint principles, grounded in empirical findings and normative reasoning, offering both theoretical foundations and practical guidance for the responsible deployment of AI in proactive content governance.
📝 Abstract
Toxicity detection algorithms, originally designed with reactive content moderation in mind, are increasingly being deployed in proactive end-user interventions that moderate content before it is sent. Through a socio-technical lens, and with a focus on the contexts in which such algorithms are applied, we explore their use in proactive moderation systems. Placing a toxicity detection algorithm in an imagined virtual mobile keyboard, we critically explore how such algorithms could be used to proactively reduce the sending of toxic content. We present findings from design workshops conducted with four distinct stakeholder groups, surfacing concerns about how contextual complexities may exacerbate inequalities in content moderation processes. Whilst only specific user groups are likely to benefit directly from these interventions, we highlight the potential for other groups to misuse them to circumvent detection, validate and gamify hate, and manipulate algorithmic models to exacerbate harm.
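To make the imagined keyboard intervention concrete, the sketch below shows one way such a system might gate outgoing messages on a classifier score, pausing and prompting the sender before a flagged message is sent. This is a minimal illustration under stated assumptions, not the system studied in the paper: `toxicity_score` is a stand-in for a real classifier, and the threshold, `Decision` structure, and confirm-to-send flow are all hypothetical design choices.

```python
from dataclasses import dataclass

TOXICITY_THRESHOLD = 0.8  # Hypothetical cut-off; a real deployment would tune this.


@dataclass
class Decision:
    send: bool          # Whether the keyboard forwards the message.
    prompt_user: bool   # Whether to ask the sender to reconsider first.
    score: float        # Classifier confidence that the text is toxic.


def toxicity_score(text: str) -> float:
    """Stand-in for a real toxicity classifier (e.g. a fine-tuned
    transformer). Here: a trivial keyword heuristic, for illustration only."""
    toxic_markers = {"idiot", "stupid", "hate you"}
    hits = sum(marker in text.lower() for marker in toxic_markers)
    return min(1.0, hits / 2)


def intercept_outgoing(text: str, user_confirmed: bool = False) -> Decision:
    """Proactive intervention point: called by the keyboard on 'send'.

    A low score passes through silently; a high score pauses sending and
    prompts the user. In this sketch, a user who confirms can still send,
    leaving the final choice with the sender.
    """
    score = toxicity_score(text)
    if score < TOXICITY_THRESHOLD or user_confirmed:
        return Decision(send=True, prompt_user=False, score=score)
    return Decision(send=False, prompt_user=True, score=score)


if __name__ == "__main__":
    draft = "you are an idiot and I hate you"
    print(intercept_outgoing(draft))
    # -> Decision(send=False, prompt_user=True, score=1.0)
    print(intercept_outgoing(draft, user_confirmed=True))
    # -> Decision(send=True, prompt_user=False, score=1.0)
```

Note that even this toy flow surfaces the risks the paper raises: a keyword heuristic is trivially circumvented by misspellings, and whoever sets the marker list and threshold decides whose speech gets interrupted.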