ToVo: Toxicity Taxonomy via Voting

📅 2024-06-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing toxic content detection models suffer from low transparency, poor customizability, and limited reproducibility, stemming primarily from closed-source training data and unexplained evaluation mechanisms. To address these challenges, the paper proposes ToVo, a voting-driven, interpretable toxicity detection framework. Its dataset creation mechanism combines voting with chain-of-thought (CoT) reasoning, producing an open-source dataset in which each sample carries diverse classification metrics, classification scores, and human-readable explanatory reasoning. A model trained on this dataset is compared against existing widely used detectors, and the approach improves transparency and customizability while facilitating fine-tuning for specific use cases. ToVo thus offers a reproducible, verifiable path toward trustworthy, user-specific content moderation.

📝 Abstract
Existing toxic content detection models face significant limitations, such as a lack of transparency, customization, and reproducibility. These challenges stem from the closed-source nature of their training data and the paucity of explanations for their evaluation mechanisms. To address these issues, we propose a dataset creation mechanism that integrates voting and chain-of-thought processes, producing a high-quality open-source dataset for toxic content detection. Our methodology ensures diverse classification metrics for each sample and includes both classification scores and explanatory reasoning for the classifications. We utilize the dataset created through our proposed mechanism to train our model, which is then compared against existing widely used detectors. Our approach not only enhances transparency and customizability but also facilitates better fine-tuning for specific use cases. This work contributes a robust framework for developing toxic content detection models, emphasizing openness and adaptability, thus paving the way for more effective and user-specific content moderation solutions.
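The abstract describes aggregating multiple votes per sample into a consensus label while retaining chain-of-thought rationales alongside scores. The paper does not publish this step's code here, so the following is a minimal illustrative sketch of one way such vote aggregation could work; the names `Vote` and `aggregate_votes` and the tie-breaking rule are assumptions, not the authors' method.

```python
# Hypothetical sketch of vote-based labeling: several scorer outputs
# ("votes") per sample are aggregated into a consensus toxicity label,
# keeping each vote's rationale so the final dataset entry carries both
# scores and explanations. All names here are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Vote:
    label: str       # e.g. "toxic" or "non-toxic"
    score: float     # scorer's confidence in [0, 1]
    rationale: str   # chain-of-thought explanation for the label

def aggregate_votes(votes):
    """Majority vote over labels; ties broken by mean confidence score."""
    counts = Counter(v.label for v in votes)
    ranked = counts.most_common()
    best, best_n = ranked[0]
    tied = [label for label, n in ranked if n == best_n]
    if len(tied) > 1:
        # Tie-break: pick the tied label with the highest mean score.
        def mean_score(label):
            scores = [v.score for v in votes if v.label == label]
            return sum(scores) / len(scores)
        best = max(tied, key=mean_score)
    return {
        "label": best,
        "agreement": best_n / len(votes),
        "rationales": [v.rationale for v in votes if v.label == best],
    }

votes = [
    Vote("toxic", 0.9, "contains a slur"),
    Vote("toxic", 0.7, "insulting tone"),
    Vote("non-toxic", 0.6, "quotes the slur without endorsing it"),
]
print(aggregate_votes(votes)["label"])  # toxic
```

Keeping the rationales of the winning votes, rather than only the label, is what lets a downstream detector be trained to emit an explanation together with its classification score.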
Problem

Research questions and friction points this paper is trying to address.

Toxic Content Detection
Model Transparency
Replicability

Innovation

Methods, ideas, or system contributions that make the work stand out.

ToVo Method
Toxic Content Detection
Transparency and Flexibility
Authors

Tinh Son Luong (Oraichain Labs)
Thanh-Thien Le (AI Researcher, VinAI Research; Natural Language Processing · Machine Learning · Continual Learning)
Thang Viet Doan (Florida International University)
L. Van (Hanoi University of Science and Technology)
T. Nguyen (University of Oregon)
Diep Thi-Ngoc Nguyen (Oraichain Labs)