🤖 AI Summary
Existing toxic content detection models suffer from low transparency, poor customizability, and limited reproducibility—stemming primarily from proprietary training data and ambiguous toxicity definitions. To address these challenges, we propose ToxVote, the first voting-driven, interpretable toxicity detection framework. It establishes a consensus-based annotation benchmark via multi-round crowdsourced voting and employs chain-of-thought (CoT) reasoning to generate human-readable attribution explanations. The framework introduces an open-source dataset supporting fine-grained, multi-dimensional toxicity classification. Methodologically, it integrates vote-consistency modeling, interpretability-aware instruction tuning, and scenario-adaptive fine-tuning. Extensive experiments demonstrate that our model significantly outperforms mainstream closed-source detectors in transparency, customization flexibility, and fine-grained classification accuracy. ToxVote establishes a reproducible, verifiable paradigm for trustworthy content moderation.
📝 Abstract
Existing toxic content detection models face significant limitations, such as a lack of transparency, customizability, and reproducibility. These challenges stem from the closed-source nature of their training data and the absence of explanations for their evaluation mechanisms. To address these issues, we propose a dataset creation mechanism that integrates voting and chain-of-thought processes, producing a high-quality open-source dataset for toxic content detection. Our methodology ensures diverse classification dimensions for each sample and includes both classification scores and explanatory reasoning for the assigned labels. We use the dataset created through this mechanism to train our model, which we then compare against existing widely used detectors. Our approach not only enhances transparency and customizability but also facilitates fine-tuning for specific use cases. This work contributes a robust framework for developing toxic content detection models, emphasizing openness and adaptability, and thus paves the way for more effective and user-specific content moderation solutions.
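To make the voting-based annotation step concrete, the following is a minimal illustrative sketch (not the paper's actual implementation; the function names, vote format, and the 0.7 agreement threshold are all assumptions) of how multi-round crowdsourced votes could be aggregated into consensus labels, with low-agreement samples queued for another annotation round:

```python
from collections import Counter

# Assumed agreement threshold for accepting a consensus label;
# the paper does not specify a concrete value.
AGREEMENT_THRESHOLD = 0.7

def consensus(votes):
    """Return (majority label, agreement ratio) for one sample's votes."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

def triage(samples):
    """Accept samples whose leading label meets the threshold;
    queue the rest for another round of annotation."""
    accepted, requeue = {}, []
    for sample_id, votes in samples.items():
        label, agreement = consensus(votes)
        if agreement >= AGREEMENT_THRESHOLD:
            accepted[sample_id] = (label, agreement)
        else:
            requeue.append(sample_id)
    return accepted, requeue

# Hypothetical votes from four annotators per sample.
samples = {
    "s1": ["toxic", "toxic", "toxic", "non-toxic"],   # 0.75 agreement → accepted
    "s2": ["toxic", "non-toxic", "insult", "toxic"],  # 0.50 agreement → re-annotate
}
accepted, requeue = triage(samples)
```

In a full pipeline, each accepted sample would then be paired with a chain-of-thought explanation justifying its label before entering the training set.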