🤖 AI Summary
This work addresses the weak interpretability and uncontrollable decision-making of hateful meme detectors by proposing an “explain-then-detect” paradigm, realized in the ExPO-HM framework. ExPO-HM employs Conditional Decision Entropy (CDE) as both a metric of reasoning quality and the reward signal for reinforcement learning, and combines supervised fine-tuning (SFT) initialization, curriculum learning, and GRPO-based policy optimization to jointly train explanation generation and classification. Evaluated on three mainstream benchmarks, ExPO-HM achieves state-of-the-art (SOTA) performance on both binary detection and fine-grained classification, with F1 improvements of up to 15% and 17% over the GRPO and DPO baselines, respectively. The framework substantially improves interpretability, reasoning reliability, and practical usefulness for content moderation.
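The summary describes CDE as an entropy-based measure of how decisively an explanation supports a prediction, used as an RL reward. The paper's exact formulation is not reproduced here; the sketch below is a minimal, hypothetical reading of that idea: compute the Shannon entropy of the label distribution conditioned on a generated explanation, and reward predictions that are both confident (low entropy) and correct. The function names and the reward shaping are assumptions, not the authors' implementation.

```python
import math

def conditional_decision_entropy(label_probs):
    """Shannon entropy (nats) of the label distribution conditioned on
    a generated explanation. Lower entropy = a more decisive decision.
    This is a plausible sketch of CDE, not ExPO-HM's exact definition."""
    return -sum(p * math.log(p) for p in label_probs if p > 0.0)

def cde_reward(label_probs, correct_index):
    """Hypothetical reward combining decisiveness and correctness:
    high when the conditioned distribution is low-entropy AND puts
    mass on the correct label."""
    entropy = conditional_decision_entropy(label_probs)
    max_entropy = math.log(len(label_probs))
    decisiveness = 1.0 - entropy / max_entropy  # normalized to [0, 1]
    correctness = label_probs[correct_index]
    return decisiveness * correctness
```

Under this reading, an explanation that leaves the classifier split 50/50 earns zero reward even if the argmax happens to be right, which is the kind of signal a bare binary reward cannot provide.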
📝 Abstract
Hateful memes have emerged as a particularly challenging form of online abuse, motivating the development of automated detection systems. Most prior approaches rely on direct detection, producing only binary predictions. Such models fail to provide the context and explanations that real-world moderation requires. Recent Explain-then-Detect approaches, using Chain-of-Thought prompting or LMM agents, perform worse than simple SFT baselines, and even advanced post-training methods such as GRPO fail to close the gap. Our analysis identifies two key issues in such systems: important policy-relevant cues, such as targets and attack types, are not hypothesized by the model as likely explanations; and the binary reward signal is insufficient to guide reasoning. To address these challenges, we propose ExPO-HM (Explain-then-Detect Policy Optimization for Hateful Memes), inspired by the training and evaluation process of human annotators. ExPO-HM combines SFT warmup, GRPO with curriculum learning, and Conditional Decision Entropy (CDE) as both metric and reward for reasoning quality. Across three hateful meme benchmarks, ExPO-HM achieves state-of-the-art performance on binary detection, fine-grained classification, and reasoning quality, with up to 15% and 17% F1 improvement over the GRPO and DPO baselines, respectively. By moving hateful meme detection from simple binary alarms to explanation-driven detection, ExPO-HM provides accurate, interpretable, and actionable moderation support.
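The abstract's training recipe pairs a reasoning-quality reward with GRPO, whose defining step is a group-relative advantage: several completions are sampled per meme, and each completion's reward is normalized against its group's statistics rather than a learned value function. A minimal sketch of that normalization step, under the assumption that per-completion rewards (e.g. CDE-based scores) are already available; this is not the paper's code:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages in the GRPO style: standardize each
    sampled completion's reward by the group's mean and population
    std. A zero-variance group yields all-zero advantages (no
    learning signal, since every sample did equally well)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

With a graded reward like CDE, the groupwise standardization can separate "confidently correct" from "barely correct" completions, whereas a binary reward collapses them to the same advantage, matching the abstract's point that a binary signal is insufficient to guide reasoning.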