LLM-based Semantic Augmentation for Harmful Content Detection

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the insufficient zero-shot performance of large language models (LLMs) in detecting context-rich harmful social media content—such as propaganda, hateful multimodal memes, and toxic comments. We propose a lightweight, prompt-engineering-based semantic enhancement method: LLMs are employed not for data synthesis, but as semantic enhancers that perform noise cleaning and contextual explanation generation on raw input text; the enhanced outputs serve as preprocessed semantic inputs to downstream supervised classifiers. Crucially, this approach improves classifier performance without increasing training data volume. Evaluated on three benchmarks—SemEval 2024 Persuasive Memes, Jigsaw Toxic Comments, and Facebook Hateful Memes—our method achieves performance comparable to fully human-annotated supervised models while reducing annotation cost by over 90%. This demonstrates the efficacy and practicality of the semantic enhancement paradigm for low-resource harmful content detection.

Technology Category

Application Category

📝 Abstract
Recent advances in large language models (LLMs) have demonstrated strong performance on simple text classification tasks, frequently under zero-shot settings. However, their efficacy declines when tackling complex social media challenges such as propaganda detection, hateful meme classification, and toxicity identification. Much of the existing work has focused on using LLMs to generate synthetic training data, overlooking the potential of LLM-based text preprocessing and semantic augmentation. In this paper, we introduce an approach that prompts LLMs to clean noisy text and provide context-rich explanations, thereby enhancing training sets without substantial increases in data volume. We systematically evaluate on the SemEval 2024 multi-label Persuasive Meme dataset and further validate on the Google Jigsaw toxic comments and Facebook hateful memes datasets to assess generalizability. Our results reveal that zero-shot LLM classification underperforms on these high-context tasks compared to supervised models. In contrast, integrating LLM-based semantic augmentation yields performance on par with approaches that rely on human-annotated data, at a fraction of the cost. These findings underscore the importance of strategically incorporating LLMs into machine learning (ML) pipeline for social media classification tasks, offering broad implications for combating harmful content online.
Problem

Research questions and friction points this paper is trying to address.

Enhancing harmful content detection using LLM-based semantic augmentation
Improving performance on complex social media classification tasks
Reducing reliance on human-annotated data for training models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based text preprocessing for noise cleaning
Semantic augmentation with context-rich explanations
Enhancing training sets without large data increases
🔎 Similar Papers
No similar papers found.