Beyond the Explicit: A Bilingual Dataset for Dehumanization Detection in Social Media

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior research on digital dehumanization focuses primarily on detecting explicit negative rhetoric, overlooking subtler, more pervasive implicit forms that lack overt hostility yet perpetuate bias against marginalized groups and are harder to detect. Method: We introduce the first theory-driven, bilingual (English–French) dataset for implicit dehumanization (16,000 annotated instances), with systematic, multi-dimensional annotations at both the document and span levels (e.g., animalization, mechanization, individuation loss). Annotation combines expert guidance with crowdsourcing, multi-source sampling, and ML-assisted refinement. Contribution/Results: Models fine-tuned on the dataset significantly outperform state-of-the-art models evaluated in zero- and few-shot in-context settings. This work establishes the first benchmark for implicit dehumanization detection, enabling reproducible cross-lingual evaluation and advancing bias-aware NLP in resource-constrained settings.

📝 Abstract
Digital dehumanization, although a critical issue, remains largely overlooked within computational linguistics and Natural Language Processing. The prevailing approach in current research concentrates primarily on a single aspect of dehumanization, identifying overtly negative statements as its core marker. This focus, while crucial for understanding harmful online communication, inadequately addresses the broader spectrum of dehumanization. Specifically, it overlooks subtler forms that, despite not being overtly offensive, still perpetuate harmful biases against marginalized groups in online interactions. These subtler forms can insidiously reinforce negative stereotypes and biases without explicit offensiveness, making them harder to detect yet equally damaging. Recognizing this gap, we use different sampling methods to collect a theory-informed bilingual dataset from Twitter and Reddit. Using crowdworkers and experts to annotate 16,000 instances at the document and span levels, we show that our dataset covers the different dimensions of dehumanization. The dataset serves both as a training resource for machine learning models and as a benchmark for evaluating future dehumanization detection techniques. To demonstrate its effectiveness, we fine-tune ML models on the dataset, achieving performance that surpasses state-of-the-art models in zero- and few-shot in-context settings.
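As a rough sketch of what a document- and span-level instance of this kind might look like in code: the field names, offset convention, and label set below are illustrative assumptions based on the dimensions named in the paper (animalization, mechanization, individuation loss), not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Dimension(Enum):
    # Illustrative label set; the paper's full taxonomy may differ.
    ANIMALIZATION = "animalization"
    MECHANIZATION = "mechanization"
    INDIVIDUATION_LOSS = "individuation_loss"

@dataclass
class Span:
    start: int            # character offset, inclusive (assumed convention)
    end: int              # character offset, exclusive (assumed convention)
    dimension: Dimension  # span-level dehumanization dimension

@dataclass
class Instance:
    text: str
    language: str                       # "en" or "fr"
    dehumanizing: bool                  # document-level label
    spans: list[Span] = field(default_factory=list)

# Example: a document-level positive with one annotated span (invented text)
ex = Instance(
    text="They swarm into the city like insects.",
    language="en",
    dehumanizing=True,
    spans=[Span(start=5, end=38, dimension=Dimension.ANIMALIZATION)],
)
```

A schema along these lines keeps the document-level judgment and the evidence spans in one record, which is convenient both for training span-extraction models and for scoring document-level classifiers against the same file.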
Problem

Research questions and friction points this paper is trying to address.

Detecting subtle dehumanization forms in social media content
Addressing overlooked bilingual dehumanization in computational linguistics
Identifying harmful biases against marginalized groups online
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilingual dataset for dehumanization detection
Document- and span-level annotation of instances
Fine-tuned ML models outperform state-of-the-art