🤖 AI Summary
Existing MLLM-as-a-Judge approaches are typically optimized for individual tasks, resulting in limited generalization capabilities. This work proposes the first integration of multi-task reinforcement learning into this paradigm, jointly training a multimodal large language model across diverse tasks to simultaneously enhance judgment consistency and alignment with human preferences. The proposed method significantly outperforms strong baselines on both in-distribution and out-of-distribution tasks, achieving state-of-the-art performance across multiple evaluation metrics. Moreover, it demonstrates remarkable generalization ability and produces assessment outcomes that closely align with human judgments.
📝 Abstract
Multimodal Large Language Models (MLLMs) have been widely adopted as judges (MLLM-as-a-Judge) due to their strong alignment with human judgment across various visual tasks. However, most existing judge models are optimized for single-task scenarios and struggle to generalize to diverse contexts, which is a critical requirement for reliable evaluation. To address this limitation, we propose Multi-Task Reinforcement Learning for MLLM-as-a-Judge (MT-RL-Judge), a framework that jointly optimizes the judge model across multiple tasks, leveraging the generalization capabilities of RL. Experimental results demonstrate that MT-RL-Judge outperforms several strong baselines in both judgment consistency and correlation with human preferences. Furthermore, our approach exhibits robust generalization on out-of-distribution tasks, further validating its effectiveness.
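The abstract does not specify MT-RL-Judge's training recipe, but the core idea of jointly optimizing one judge model across multiple tasks can be illustrated with a generic multi-task RL sketch. The snippet below is a hypothetical, simplified illustration (task names, rollout structure, and per-task reward normalization are all assumptions, not the paper's method): rollouts from every judge sub-task are mixed into one batch, and rewards are normalized within each task so that no single task's reward scale dominates the joint policy update.

```python
import random
import statistics

# Hypothetical judge sub-tasks; the actual task set in MT-RL-Judge is not
# stated in the abstract.
TASKS = ["score", "pairwise", "ranking"]

def sample_batch(rollouts_per_task=4, seed=0):
    """Draw rollouts from every task so a single update covers all of them."""
    rng = random.Random(seed)
    batch = []
    for task in TASKS:
        for _ in range(rollouts_per_task):
            # Stand-in for a judge rollout and its scalar reward
            # (e.g., agreement with a human preference label).
            batch.append({"task": task, "reward": rng.uniform(0.0, 1.0)})
    return batch

def per_task_advantages(batch):
    """Normalize rewards within each task so reward scales are comparable."""
    by_task = {t: [r["reward"] for r in batch if r["task"] == t] for t in TASKS}
    stats = {t: (statistics.mean(v), statistics.pstdev(v) or 1.0)
             for t, v in by_task.items()}
    for r in batch:
        mean, std = stats[r["task"]]
        r["advantage"] = (r["reward"] - mean) / std
    return batch

batch = per_task_advantages(sample_batch())
# After normalization, each task's advantages sum to (numerically) zero,
# so no task dominates the joint update purely through its reward scale.
for t in TASKS:
    adv = [r["advantage"] for r in batch if r["task"] == t]
    assert abs(sum(adv)) < 1e-9
```

In a real training loop these advantages would weight a policy-gradient update of the judge MLLM; the sketch only shows why mixing tasks in one batch, rather than training per task, pushes the model toward a single policy that performs well across all of them.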