🤖 AI Summary
Harmful-content detection on social media faces several challenges: scarce labeled data, poor model interpretability, and high computational cost. To address these issues under low-resource conditions, this paper proposes SMARTER, a two-stage self-amplified training framework for efficient toxicity identification. First, a large language model (LLM) autonomously generates both correct and incorrect explanations for input instances; preference optimization over these pairs then jointly improves classification accuracy and attribution fidelity. Second, a cross-model style–semantic alignment mechanism transfers knowledge from a strong teacher model to a lightweight student model. SMARTER integrates few-shot learning, synthetic explanation generation, preference modeling, and knowledge distillation. Evaluated on the HateXplain, Latent Hate, and Implicit Hate benchmarks, it achieves up to a 13.5% improvement in macro-F1, substantially reducing reliance on large-scale human annotation while delivering both high performance and intrinsic interpretability.
📝 Abstract
WARNING: This paper contains examples of offensive material. Toxic content has become pervasive on social media platforms. We introduce SMARTER, a data-efficient two-stage framework for explainable content moderation using Large Language Models (LLMs). In Stage 1, we leverage LLMs' own outputs to generate synthetic explanations for both correct and incorrect labels, enabling alignment via preference optimization with minimal human supervision. In Stage 2, we refine explanation quality through cross-model training, allowing weaker models to align stylistically and semantically with stronger ones. Experiments on three benchmark tasks -- HateXplain, Latent Hate, and Implicit Hate -- demonstrate that SMARTER enables LLMs to achieve up to a 13.5% macro-F1 improvement over standard few-shot baselines while using only a fraction of the full training data. Our framework offers a scalable strategy for low-resource settings by harnessing LLMs' self-improving capabilities for both classification and explanation.
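To make Stage 1 concrete, the sketch below shows one way the preference data and objective could be set up: for each post, the model's explanation supporting the gold label is treated as the "chosen" response and its explanation supporting a wrong label as the "rejected" one, and the pairs are scored with a DPO-style loss. This is an illustrative reconstruction, not the paper's implementation; `toy_explain` and all names are hypothetical stand-ins for the LLM.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one preference pair: -log sigmoid of the
    beta-scaled log-prob margin relative to a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def build_preference_pairs(examples, labels, explain):
    """Pair an explanation for the gold label (chosen) with an
    explanation for an incorrect label (rejected), per Stage 1."""
    pairs = []
    for text, gold in examples:
        wrong = next(l for l in labels if l != gold)  # any incorrect label
        pairs.append({
            "prompt": text,
            "chosen": explain(text, gold),     # supports the correct label
            "rejected": explain(text, wrong),  # supports a wrong label
        })
    return pairs

# Toy stand-in for the LLM's explanation generator (assumption).
def toy_explain(text, label):
    return f"This post is {label} because of its wording: {text!r}"

pairs = build_preference_pairs(
    [("example post", "toxic")],
    labels=["toxic", "non-toxic"],
    explain=toy_explain,
)
```

With equal policy and reference log-probabilities the margin is zero and the loss is `-log(0.5) ≈ 0.693`; training pushes the chosen explanation's likelihood above the rejected one's.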