🤖 AI Summary
Social media platforms struggle to detect harmful muscle dysmorphia (MD) content disguised as fitness-related material, a threat particularly acute for adolescent males. To address this, we introduce BigTok, the first multimodal short-video dataset annotated by clinical psychology experts, and propose BigTokDetect, a domain-adaptive detection framework built upon vision-language models. Our method jointly leverages visual frames, speech-to-text transcripts, and semantic features to enable fine-grained classification across body image, nutrition, and training dimensions. A key contribution is the empirical demonstration that the visual modality is critical for identifying covert MD content, thereby establishing a new benchmark for multimodal harmful-content detection. Experiments show that our approach achieves 0.829 accuracy on primary categories and 0.690 on fine-grained subcategories; multimodal fusion improves performance by 5–10% over text-only baselines.
📝 Abstract
Social media platforms increasingly struggle to detect harmful content that promotes muscle dysmorphic behaviors, particularly pro-bigorexia content that disproportionately affects adolescent males. Unlike traditional eating disorder detection focused on the "thin ideal," pro-bigorexia material masquerades as legitimate fitness content through complex multimodal combinations of visual displays, coded language, and motivational messaging that evade text-based detection systems. We address this challenge by developing BigTokDetect, a clinically informed detection framework for identifying pro-bigorexia content on TikTok. We introduce BigTok, the first expert-annotated multimodal dataset of over 2,200 TikTok videos labeled by clinical psychologists and psychiatrists across five primary categories spanning body image, nutrition, exercise, supplements, and masculinity. Through a comprehensive evaluation of state-of-the-art vision-language models, we achieve 0.829 accuracy on primary category classification and 0.690 on subcategory detection via domain-specific finetuning. Our ablation studies demonstrate that multimodal fusion improves performance by 5–10% over text-only approaches, with video features providing the most discriminative signals. These findings establish new benchmarks for multimodal harmful content detection and provide both the computational tools and methodological framework needed for scalable content moderation in specialized mental health domains.
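The multimodal fusion described above can be illustrated with a minimal sketch. The abstract does not specify the fusion architecture, so the concatenation-based late fusion, the embedding dimensions, and all function names below are illustrative assumptions; only the three modalities (video frames, transcripts, semantic features) and the five primary categories come from the paper.

```python
import numpy as np

# The five primary categories named in the abstract.
CATEGORIES = ["body image", "nutrition", "exercise", "supplements", "masculinity"]

def fuse(video_emb, text_emb, semantic_emb):
    """Hypothetical late fusion: concatenate per-modality embeddings.

    The real framework fine-tunes a vision-language model; this sketch
    only illustrates how three modality vectors can feed one classifier.
    """
    return np.concatenate([video_emb, text_emb, semantic_emb])

def classify(fused, weights, bias):
    """Linear head with softmax over the five primary categories."""
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return CATEGORIES[int(np.argmax(probs))], probs

# Toy embeddings and an untrained random head (dimensions are assumptions).
rng = np.random.default_rng(0)
video_emb = rng.normal(size=512)     # e.g. pooled frame features
text_emb = rng.normal(size=256)      # e.g. transcript encoding
semantic_emb = rng.normal(size=64)   # e.g. auxiliary semantic features
fused = fuse(video_emb, text_emb, semantic_emb)
weights = rng.normal(size=(len(CATEGORIES), fused.size))
bias = np.zeros(len(CATEGORIES))
label, probs = classify(fused, weights, bias)
```

In practice, dropping `video_emb` from the fusion would correspond to the text-only ablation that the paper reports as 5–10% weaker.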