🤖 AI Summary
Multimodal agents suffer from limited self-correction and generalization because they lack external feedback. Method: This paper introduces Agent-RewardBench, the first unified reward-modeling benchmark for multimodal agents, spanning three core dimensions (perception, planning, and safety) across seven realistic scenarios, with fine-grained step-level reward evaluation. The benchmark rests on three design choices: (i) multidimensional, real-world scenarios, (ii) step-level assessment, and (iii) high-quality, difficulty-controlled data. The dataset is constructed by sampling from 10 diverse multimodal large models, followed by rigorous human verification and difficulty calibration. Contribution/Results: Experiments reveal that current state-of-the-art multimodal models perform substantially below expectations on Agent-RewardBench, underscoring the need for dedicated reward-modeling training and filling a critical gap in the systematic evaluation of external feedback mechanisms for multimodal agents.
📝 Abstract
As Multimodal Large Language Models (MLLMs) advance, multimodal agents show promise in real-world tasks such as web navigation and embodied intelligence. However, lacking external feedback, these agents struggle with self-correction and generalization. A promising approach is to use reward models as external feedback, but there is no clear guidance on how to select reward models for agents, so a reward benchmark targeted at agents is urgently needed. To address these challenges, we propose Agent-RewardBench, a benchmark designed to evaluate the reward-modeling ability of MLLMs. The benchmark has three key features: (1) Evaluation across multiple dimensions and real-world agent scenarios. It covers perception, planning, and safety across 7 scenarios; (2) Step-level reward evaluation. It assesses agent capabilities at individual steps of a task, providing a more granular view of performance during planning; and (3) Appropriate difficulty and high quality. We carefully sample responses from 10 diverse models, control difficulty to keep tasks challenging, and manually verify the data to ensure its integrity. Experiments demonstrate that even state-of-the-art multimodal models show limited performance, highlighting the need for specialized training in agent reward modeling. Code is available on GitHub.
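To make the step-level evaluation idea concrete, here is a minimal, hypothetical sketch (not the paper's actual protocol or code): for each step of an agent trajectory, a reward model scores a correct ("chosen") and an incorrect ("rejected") candidate action given the current state, and the benchmark reports the fraction of steps where the reward model prefers the chosen action. The `toy_reward` function below is purely illustrative; in practice the scorer would be an MLLM-based reward model consuming screenshots and action descriptions.

```python
def step_level_accuracy(trajectory, reward_model):
    """Fraction of steps where the reward model prefers the chosen action.

    trajectory: list of dicts with 'state', 'chosen', and 'rejected' keys.
    reward_model: callable (state, action) -> scalar score.
    """
    correct = 0
    for step in trajectory:
        r_chosen = reward_model(step["state"], step["chosen"])
        r_rejected = reward_model(step["state"], step["rejected"])
        if r_chosen > r_rejected:
            correct += 1
    return correct / len(trajectory)

# Toy stand-in reward model: counts how many words of the state description
# appear in the candidate action (illustrative only, not a real scorer).
def toy_reward(state, action):
    return sum(1 for word in state.split() if word in action)

# Tiny hypothetical web-navigation trajectory with one good and one bad
# candidate action per step.
traj = [
    {"state": "click the login button",
     "chosen": "click login", "rejected": "scroll down"},
    {"state": "type username into field",
     "chosen": "type username", "rejected": "click logo"},
]
print(step_level_accuracy(traj, toy_reward))  # → 1.0
```

A per-step preference accuracy like this is what distinguishes step-level reward evaluation from outcome-only scoring: a model can be rewarded for recognizing a good intermediate action even when the full task outcome is unavailable.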