🤖 AI Summary
This work addresses the limitations of multimodal content safety moderation, which suffers from sparse data and supervision signals, and where conventional binary labels often lead models to rely on superficial shortcuts rather than capturing fine-grained safety semantics. To overcome this, the authors propose UniMod, a novel paradigm that reframes moderation as a multi-attribute trajectory reasoning process encompassing evidence localization, modality-wise evaluation, risk mapping, policy decision, and response generation, thereby enabling dense safety semantic modeling. Key innovations include the first multi-attribute trajectory reasoning framework, the UniRM multi-head scalar reward model providing attribute-level supervision, task-specific parameter decoupling, and a dynamic rebalancing training strategy. Experiments show that UniMod matches or exceeds state-of-the-art performance while using less than 40% of the training data required by leading baselines, and ablation studies confirm the efficacy of trajectory reasoning.
📝 Abstract
Safety moderation is pivotal for identifying harmful content. Despite the success of textual safety moderation, its multimodal counterparts remain hindered by a dual sparsity of data and supervision. Conventional reliance on binary labels leads to shortcut learning, which obscures the intrinsic classification boundaries necessary for effective multimodal discrimination. Hence, we propose a novel learning paradigm (UniMod) that transitions from sparse decision-making to dense reasoning traces. By constructing structured trajectories encompassing evidence grounding, modality assessment, risk mapping, policy decision, and response generation, we reformulate monolithic decision tasks into a multi-dimensional boundary learning process. This approach forces the model to ground its decisions in explicit safety semantics, preventing it from converging on superficial shortcuts. To facilitate this paradigm, we develop a multi-head scalar reward model (UniRM). UniRM provides multi-dimensional supervision by assigning attribute-level scores to the response generation stage. Furthermore, we introduce specialized optimization strategies to decouple task-specific parameters and rebalance training dynamics, effectively resolving interference between diverse objectives in multi-task learning. Empirical results show UniMod achieves competitive textual moderation performance and sets a new multimodal benchmark using less than 40% of the training data used by leading baselines. Ablations further validate our multi-attribute trajectory reasoning, offering an effective and efficient framework for multimodal moderation. Supplementary materials are available at the project website: https://trustworthylab.github.io/UniMod/
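The abstract's two core ideas, structured reasoning trajectories in place of binary labels and attribute-level scalar rewards from a multi-head model, can be illustrated with a minimal sketch. All field names, attribute names, and weights below are illustrative assumptions, not the paper's actual interface:

```python
# Hypothetical sketch of a UniMod-style trajectory and UniRM-style
# multi-head scalar scoring; names and weights are assumed for illustration.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """One structured moderation trace instead of a bare safe/unsafe label."""
    evidence: str                 # evidence grounding: where the risk appears
    modality_assessment: dict = field(default_factory=dict)  # per-modality judgment
    risk_category: str = ""       # risk mapping onto a safety taxonomy
    policy_decision: str = ""     # e.g. "block", "allow", "escalate"
    response: str = ""            # generated moderation response

# Assumed attribute names for the reward heads (not specified in the abstract).
ATTRIBUTES = ["faithfulness", "coverage", "policy_alignment"]

def score_response(features: list, heads: dict) -> dict:
    """Multi-head scalar reward: one linear head per attribute over shared features."""
    return {
        attr: sum(w * f for w, f in zip(weights, features))
        for attr, weights in heads.items()
    }

# Toy weights and features to show the attribute-level supervision signal.
heads = {attr: [0.2, 0.5, 0.3] for attr in ATTRIBUTES}
scores = score_response([1.0, 0.0, 1.0], heads)
```

Here each head produces an independent scalar, so training can supervise every attribute of the response generation stage separately rather than collapsing feedback into a single binary signal.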