🤖 AI Summary
Existing multimodal reinforcement learning (MMRL) agents rely predominantly on sparse outcome rewards, limiting their ability to model fine-grained capabilities such as stepwise reasoning and spatiotemporal localization, while remaining vulnerable to noisy teacher signals and reward hacking. Method: We propose Argos, a framework featuring a hybrid pool of scoring functions (rule-based metrics, teacher-model scoring, and supervised fine-tuning–guided filtering) and a dynamically switchable proxy reward mechanism that jointly evaluates reasoning quality, spatiotemporal localization accuracy, and final-answer correctness. We provide theoretical guarantees of Pareto optimality for the proposed reward design, which mitigates ungrounded reasoning and reward hacking. Contribution/Results: Leveraging online verification–driven MMRL training, Argos achieves state-of-the-art performance on benchmarks spanning spatial reasoning, visual hallucination detection, and embodied AI, while significantly improving training stability and generalization across diverse tasks.
📝 Abstract
Agentic reasoning models trained with multimodal reinforcement learning (MMRL) have become increasingly capable, yet they are almost universally optimized with sparse, outcome-based rewards computed from final answers alone. Richer rewards derived from the reasoning tokens can improve learning significantly by providing more fine-grained guidance. However, computing rewards more informative than outcome-based ones is challenging in MMRL, since different samples may require different scoring functions, and teacher models may provide noisy reward signals. In this paper, we introduce Argos (Agentic Reward for Grounded & Objective Scoring), a principled reward agent for training multimodal reasoning models on agentic tasks. For each sample, Argos selects from a pool of teacher-model-derived and rule-based scoring functions to simultaneously evaluate (i) final-response accuracy, (ii) spatiotemporal localization of referred entities and actions, and (iii) the quality of the reasoning process. By leveraging our agentic verifier in both SFT data curation and RL training, our model achieves state-of-the-art results across multiple agentic tasks, including spatial reasoning, visual hallucination detection, and robotics and embodied AI benchmarks. Critically, we demonstrate that relying solely on SFT post-training with highly curated reasoning data is insufficient: without our online verification, agents invariably collapse to ungrounded solutions during RL. We also show that our agentic verifier helps reduce reward hacking in MMRL. Finally, we provide a theoretical justification for the effectiveness of Argos through the concept of Pareto optimality.
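To make the abstract's mechanism concrete, here is a minimal Python sketch of a per-sample reward agent in the spirit of Argos: a pool of scoring functions from which a subset is selected per sample, whose sub-scores (outcome, reasoning quality, localization) are combined into one reward. This is an illustrative assumption, not the paper's implementation; every name, the selection heuristic, and the proxy scorers are hypothetical (a real system would, e.g., query a teacher model instead of using a length proxy).

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Sample:
    question: str
    answer: str      # model's final answer
    gold: str        # reference answer
    reasoning: str   # reasoning tokens produced before the answer
    has_boxes: bool  # whether spatial grounding is expected for this sample

def exact_match(s: Sample) -> float:
    # Rule-based scorer: binary outcome reward on the final answer.
    return 1.0 if s.answer.strip().lower() == s.gold.strip().lower() else 0.0

def reasoning_length_proxy(s: Sample) -> float:
    # Crude stand-in for teacher-model reasoning scoring: reward non-trivial,
    # bounded reasoning. A real verifier would call a judge model here.
    return min(len(s.reasoning.split()) / 50.0, 1.0)

def localization_proxy(s: Sample) -> float:
    # Stand-in for spatiotemporal-localization scoring (e.g., box/segment IoU).
    return 1.0 if s.has_boxes else 0.0

def select_scorers(s: Sample) -> Dict[str, Callable[[Sample], float]]:
    # Per-sample selection from the scoring-function pool: only apply the
    # localization scorer when the task actually requires grounding.
    scorers = {"outcome": exact_match, "reasoning": reasoning_length_proxy}
    if s.has_boxes:
        scorers["localization"] = localization_proxy
    return scorers

def reward(s: Sample, weights: Dict[str, float]) -> float:
    # Weighted average of the selected sub-rewards.
    scorers = select_scorers(s)
    return sum(weights.get(k, 1.0) * f(s) for k, f in scorers.items()) / len(scorers)

sample = Sample("Where is the cat?", "on the mat", "on the mat",
                "The cat sits on the mat near the door.", has_boxes=True)
print(reward(sample, {"outcome": 1.0, "reasoning": 0.5, "localization": 1.0}))
```

The selection step is the key design choice: rather than one fixed reward, each sample is routed to the scorers that are meaningful for it, which is what allows dense guidance on reasoning and grounding without penalizing samples where those signals do not apply.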