🤖 AI Summary
Multimodal reward models (MM-RMs) suffer from poor out-of-distribution (OOD) generalization due to overreliance on spurious text-only correlations—termed “textual shortcuts”—in training data. This work identifies the root cause of this failure mode and proposes the first shortcut-aware learning framework for MM-RMs. Our approach jointly employs sample-level dynamic importance reweighting, multimodal consistency regularization, and counterfactual data augmentation to systematically detect and suppress unimodal biases, thereby shifting model reliance from textual shortcuts toward genuine multimodal alignment. Evaluated on six diverse OOD benchmarks, our method achieves an average +12.7% improvement in generalization accuracy and a +9.3% increase in downstream RLHF win rates. Moreover, the framework scales efficiently to datasets with millions of samples, enabling robust, large-scale multimodal reward modeling.
📝 Abstract
Multimodal Reward Models (MM-RMs) are crucial for aligning Large Language Models (LLMs) with human preferences, particularly as LLMs increasingly interact with multimodal data. However, we find that MM-RMs trained on existing datasets often struggle to generalize to out-of-distribution data due to their reliance on unimodal spurious correlations, primarily text-only shortcuts within the training distribution, which prevents them from leveraging true multimodal reward functions. To address this, we introduce a Shortcut-aware MM-RM learning algorithm that mitigates this issue by dynamically reweighting training samples, shifting the distribution toward better multimodal understanding, and reducing dependence on unimodal spurious correlations. Our experiments demonstrate significant improvements in generalization, downstream task performance, and scalability, establishing a more robust framework for multimodal reward modeling.