🤖 AI Summary
Multimodal Large Reasoning Models (MLRMs) suffer from a mismatch: they over-reason on simple tasks yet under-explore complex ones. To address this, we propose ARES, a task-difficulty-aware adaptive reasoning framework for multimodal reasoning. Its core idea is difficulty-aware entropy shaping at the token level: a sliding-window mechanism identifies high-window-entropy tokens that mark critical reasoning steps, and a two-stage training paradigm pairs cold-start initialization with dynamic exploration via Adaptive Entropy Policy Optimization (AEPO), whose hierarchical entropy rewards and dynamic KL divergence constraints balance reasoning depth and breadth. Evaluated on mathematical, logical, and multimodal benchmarks, ARES matches or surpasses commercial systems while significantly reducing inference cost (e.g., 23–38% fewer tokens), demonstrating the effectiveness and generalizability of difficulty-driven adaptive reasoning.
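The sliding-window identification of high-entropy tokens described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, the quantile threshold, and the function names are all assumptions.

```python
import numpy as np

def token_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-token Shannon entropy from next-token distributions of shape (T, V)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def high_window_entropy_tokens(probs: np.ndarray, window: int = 5,
                               quantile: float = 0.8) -> np.ndarray:
    """Flag tokens whose sliding-window-averaged entropy falls in the top quantile.

    `window` and `quantile` are illustrative hyperparameters, not values
    reported by the paper.
    """
    ent = token_entropy(probs)
    kernel = np.ones(window) / window
    win_ent = np.convolve(ent, kernel, mode="same")  # centered moving average
    threshold = np.quantile(win_ent, quantile)
    return win_ent >= threshold  # boolean mask over the T token positions
```

Averaging entropies over a window before thresholding is what smooths out the single-token noise the paper points to: an isolated spiky token is damped, while a sustained high-entropy region survives the averaging and gets flagged.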
📝 Abstract
Recent advances in multimodal large reasoning models (MLRMs) have substantially improved their ability to solve complex textual and visual tasks. However, these models tend to overthink on simple problems, producing unnecessarily lengthy reasoning traces, while under-exploring on challenging ones, leading to missed solutions. To address this imbalance, we propose ARES, a unified open-source framework for adaptive reasoning that dynamically allocates exploration effort based on task difficulty. Our approach is motivated by two key empirical findings: (i) while single-token entropy is noisy, high window-entropy (HWE) tokens (token-level entropies averaged within a sliding window) can reliably capture reasoning-critical moments; and (ii) reducing HWE usage benefits easy problems, while increasing it is essential for solving hard ones. Building on these insights, ARES introduces a two-stage training pipeline. In the Adaptive Cold-Start stage, we curate multimodal and textual data paired with reasoning traces of length proportional to problem difficulty, equipping the model with initial difficulty awareness. In the second stage, we develop Adaptive Entropy Policy Optimization (AEPO), which uses HWE tokens as exploration triggers to decide when to explore, and a hierarchical entropy reward with dynamic KL control to decide how much to explore. Extensive experiments demonstrate that ARES achieves superior performance and reasoning efficiency across diverse mathematical, logical, and multimodal benchmarks, closing the gap to leading commercial systems at significantly lower inference cost.
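To make the AEPO reward design concrete, here is one way a difficulty-conditioned entropy reward with a KL penalty could be composed. Everything here is an assumption for illustration: the linear difficulty target, the coefficients, the gating of the entropy term on correctness, and the function name are not from the paper, which describes its hierarchical reward and dynamic KL schedule only at a high level.

```python
def aepo_style_reward(correct: bool, hwe_fraction: float, difficulty: float,
                      kl_to_ref: float, kl_coef: float = 0.05,
                      entropy_coef: float = 0.5) -> float:
    """Illustrative reward: outcome term + difficulty-shaped entropy term - KL penalty.

    hwe_fraction: fraction of generated tokens flagged as high window-entropy.
    difficulty:   task difficulty in [0, 1]; assumed to set the desired HWE usage,
                  so easy problems are rewarded for terse traces and hard ones
                  for sustained exploration.
    """
    task_reward = 1.0 if correct else 0.0
    # Assumed target: desired HWE usage tracks difficulty linearly.
    entropy_reward = -abs(hwe_fraction - difficulty)
    # "Hierarchical" here is modeled as gating: the entropy term only refines
    # already-correct rollouts (an assumption about the reward structure).
    shaped = task_reward + (entropy_coef * entropy_reward if correct else 0.0)
    return shaped - kl_coef * kl_to_ref
```

Under this sketch, a correct solution to a hard problem that explored heavily (`hwe_fraction` near `difficulty`) scores higher than an equally correct but terse one, matching the abstract's finding that increased HWE usage is essential on hard problems.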