🤖 AI Summary
The role of entropy in multimodal large language models (MLLMs) for visual grounding remains poorly understood, and existing entropy regulation strategies lack interpretability and task adaptability. Method: This paper introduces the Entropy Control Visual Grounding Policy Optimization (ECVGPO) algorithm, a reinforcement learning–based approach featuring a dynamic entropy regularization mechanism that adaptively balances exploration and exploitation. Unlike conventional fixed-entropy or black-box entropy control methods, ECVGPO provides explicit, interpretable, and task-aware entropy modulation. Contribution/Results: Extensive experiments demonstrate that ECVGPO improves both performance and training stability across multiple visual grounding benchmarks and mainstream MLLMs. It also exhibits strong generalization, establishing a new paradigm for perception-decision co-optimization in MLLMs.
📝 Abstract
Recent advances in fine-tuning multimodal large language models (MLLMs) with reinforcement learning have achieved remarkable progress, particularly with the introduction of various entropy control techniques. However, the role and characteristics of entropy in perception-oriented tasks such as visual grounding, as well as effective strategies for controlling it, remain largely unexplored. To address this gap, we focus on the visual grounding task and analyze the role and characteristics of entropy in comparison to reasoning tasks. Building on these findings, we introduce ECVGPO (Entropy Control Visual Grounding Policy Optimization), an interpretable algorithm designed for effective entropy regulation. Entropy control lets the algorithm strike a better balance between exploration and exploitation. Experiments show that ECVGPO achieves broad improvements across various benchmarks and models.
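The abstract does not spell out how the entropy regulation works. As a purely illustrative sketch (not the paper's actual ECVGPO update rule), adaptive entropy control in RL fine-tuning is often implemented by nudging an entropy-bonus coefficient toward a target entropy: when the policy becomes too deterministic, the coefficient rises to encourage exploration; when the policy is too diffuse, it falls to favor exploitation. All names and the target/learning-rate values below are hypothetical.

```python
import math

def policy_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of a categorical policy distribution."""
    return -sum(p * math.log(max(p, eps)) for p in probs)

def adapt_entropy_coef(coef, current_entropy, target_entropy, lr=0.05):
    """Hypothetical adaptive rule: increase the entropy-bonus coefficient
    when the policy's entropy is below target (push toward exploration),
    decrease it when above (push toward exploitation). The coefficient
    is clamped to stay non-negative."""
    coef += lr * (target_entropy - current_entropy)
    return max(coef, 0.0)

# Toy demonstration: a near-deterministic policy sits well below a
# target entropy of 1.0 nat, so the coefficient is driven upward.
probs = [0.97, 0.01, 0.01, 0.01]
h = policy_entropy(probs)                      # low entropy (~0.17 nats)
coef = adapt_entropy_coef(coef=0.01,
                          current_entropy=h,
                          target_entropy=1.0)  # coefficient increases
```

In a training loop, the resulting coefficient would scale an entropy bonus added to the policy-gradient loss each update; the summary's claim is that ECVGPO makes this modulation explicit and task-aware rather than fixed or opaque.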