🤖 AI Summary
Toxic speech detection in spoken Mandarin suffers from a lack of annotated data and effective multimodal methods. Method: This paper introduces the first large-scale, manually annotated Chinese spoken-audio dataset for toxicity detection and fine-grained toxic sentiment classification (e.g., anger, sarcasm, contempt), covering 13 realistic scenarios; it is also the first work to systematically distinguish toxicity types from their underlying emotional origins. We propose an end-to-end multimodal framework integrating acoustic features (Whisper + Wav2Vec 2.0), emotion representations (Emotion2Vec), and textual features. Results: On our held-out test set, the proposed method achieves an F1-score more than 12% higher than text-only and unimodal baselines, indicating that prosodic cues such as tone, speech rate, and pauses are decisive for identifying implicit toxicity in Mandarin speech. This work fills a critical gap in spoken-language toxicity detection research.
📝 Abstract
Despite extensive research on toxic speech detection in text, a critical gap remains in handling spoken Mandarin audio. The lack of annotated datasets that capture the unique prosodic cues and culturally specific expressions in Mandarin leaves spoken toxicity underexplored. To address this, we introduce ToxicTone -- the largest public dataset of its kind -- featuring detailed annotations that distinguish both forms of toxicity (e.g., profanity, bullying) and sources of toxicity (e.g., anger, sarcasm, dismissiveness). Our data, sourced from diverse real-world audio and organized into 13 topical categories, mirrors authentic communication scenarios. We also propose a multimodal detection framework that integrates acoustic, linguistic, and emotional features using state-of-the-art speech and emotion encoders. Extensive experiments show our approach outperforms text-only and baseline models, underscoring the essential role of speech-specific cues in revealing hidden toxic expressions.
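The multimodal fusion described above can be sketched, at a high level, as a late-fusion classifier: embeddings from the acoustic, emotion, and text encoders are concatenated and passed to a classification head. The sketch below uses NumPy with illustrative dimensions and random stand-in embeddings; the function name, dimensions, and fusion strategy are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative embedding sizes (assumptions, not from the paper):
# acoustic features (e.g., Whisper / Wav2Vec 2.0), emotion features
# (e.g., Emotion2Vec), and text-encoder features for one utterance.
D_ACOUSTIC, D_EMOTION, D_TEXT, N_CLASSES = 512, 256, 768, 4

def fuse_and_classify(acoustic, emotion, text, W, b):
    """Late fusion: concatenate per-modality embeddings, then a linear head."""
    fused = np.concatenate([acoustic, emotion, text])   # shape (D_total,)
    logits = W @ fused + b                              # shape (N_CLASSES,)
    # Softmax over toxic-sentiment classes (e.g., anger, sarcasm, contempt, none)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy inputs standing in for real encoder outputs
acoustic = rng.standard_normal(D_ACOUSTIC)
emotion = rng.standard_normal(D_EMOTION)
text = rng.standard_normal(D_TEXT)
W = rng.standard_normal((N_CLASSES, D_ACOUSTIC + D_EMOTION + D_TEXT)) * 0.01
b = np.zeros(N_CLASSES)

probs = fuse_and_classify(acoustic, emotion, text, W, b)
```

In a real system the linear head would be a trained network and the three inputs would come from the respective pretrained encoders, but the shape of the computation (concatenate, then classify) is the same.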