🤖 AI Summary
Cross-platform violent content detection is hindered by the scarcity of high-quality, fine-grained annotated datasets—particularly those covering subtypes such as political and sexual violence across multiple platforms.
Method: We construct the first large-scale, manually annotated cross-platform violent threat dataset comprising 30,000 instances from Weibo, Twitter, and Reddit, supporting both binary classification and fine-grained multi-subtype recognition. We conduct supervised learning and cross-platform transfer evaluation to assess representational consistency.
Contribution/Results: Empirical results demonstrate strong cross-platform consistency in violent content representations: models trained on a single platform achieve high accuracy when tested on others, and performance further improves when training on merged multi-source data. This challenges the “platform-isolated modeling” assumption and validates the semantic transferability of violent content representations. Our dataset and findings provide a critical empirical foundation and methodological validation for robust, generalizable cross-platform content safety governance.
📝 Abstract
Violent threats remain a significant problem across social media platforms. High-quality annotated data facilitates research into understanding and detecting malicious content, including violence. In this paper, we introduce a cross-platform dataset of 30,000 posts hand-coded for violent threats and subtypes of violence, including political and sexual violence. To evaluate the signal present in this dataset, we perform a machine learning analysis alongside an existing dataset of violent comments from YouTube. We find that, despite the datasets originating from different platforms and using different coding criteria, we achieve high classification accuracy both when training on one dataset and testing on the other and when training on the merged datasets. These results have implications for content-classification strategies and for our understanding of violent content across social media.
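The cross-dataset transfer protocol described above can be sketched roughly as follows: train a classifier on posts from one source, evaluate it zero-shot on another, then retrain on the merged training data and evaluate on the same held-out set. Everything here is an illustrative assumption, not the paper's actual setup: the toy posts, the labels, and the TF-IDF + logistic-regression pipeline stand in for the real corpora and models.

```python
# Hedged sketch of cross-platform transfer evaluation.
# The data below is synthetic; label 1 = violent threat, 0 = benign.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

platform_a = [("i will hurt you tomorrow", 1),
              ("lovely weather today", 0),
              ("you deserve to be beaten", 1),
              ("great concert last night", 0)]
# Second platform, split so the merged condition has a held-out test set.
platform_b_train = [("someone should attack him", 1),
                    ("thanks for the recipe", 0)]
platform_b_test = [("watch your back or else", 1),
                   ("see you at the game", 0)]

def split(data):
    texts, labels = zip(*data)
    return list(texts), list(labels)

def evaluate(train, test):
    """Fit on `train`, report accuracy on `test` (both lists of (text, label))."""
    x_tr, y_tr = split(train)
    x_te, y_te = split(test)
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(x_tr, y_tr)
    return accuracy_score(y_te, clf.predict(x_te))

# Zero-shot transfer: train only on platform A, test on platform B.
acc_transfer = evaluate(platform_a, platform_b_test)
# Merged condition: train on A plus B's training split, same test set.
acc_merged = evaluate(platform_a + platform_b_train, platform_b_test)
print(f"A -> B: {acc_transfer:.2f}, merged -> B: {acc_merged:.2f}")
```

With real corpora one would compare these two numbers against a within-platform baseline; the paper's finding corresponds to the transfer and merged accuracies remaining close to that baseline.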