ResAdapt: Adaptive Resolution for Efficient Multimodal Reasoning

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational-efficiency bottleneck that multimodal large language models face when processing high-resolution, long-sequence visual inputs, a bottleneck that stems from the sheer volume of pixel-level data fed to the encoder. The authors propose ResAdapt, an input-side adaptive framework that, for the first time, formulates visual budget allocation as a contextual bandit problem. They introduce Cost-Aware Policy Optimization (CAPO), which trains a lightweight Allocator to dynamically adjust the visual budget assigned to each frame. This approach achieves Pareto-optimal trade-offs between computation and accuracy while preserving the native interface of the backbone model. Experiments demonstrate that, under the same visual budget, ResAdapt supports up to 16× more frames and improves performance by over 15%, substantially advancing the efficiency–accuracy frontier on inference-intensive tasks.
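The per-frame allocation described above can be pictured as a tiny contextual bandit: a cheap context feature per frame, a discrete set of budget "arms," and an epsilon-greedy policy. The sketch below is illustrative only; the tier values, the `pick_tier` name, and the linear value model are assumptions, not the paper's actual Allocator.

```python
import random

# Candidate visual-token budgets (arms) a frame could receive.
# These tier values are hypothetical, chosen only for illustration.
TIERS = [64, 144, 256, 576]

def pick_tier(frame_score: float, weights, epsilon: float = 0.1) -> int:
    """Choose a budget tier (arm index) for one frame.

    frame_score: a cheap saliency/complexity estimate for the frame
                 (the bandit "context").
    weights:     one linear weight per tier -- a minimal stand-in for
                 a learned value model.
    epsilon:     exploration rate for epsilon-greedy selection.
    """
    if random.random() < epsilon:
        # Explore: sample a random tier.
        return random.randrange(len(TIERS))
    # Exploit: estimated value minus a small token-cost penalty per tier.
    values = [w * frame_score - 0.001 * t for w, t in zip(weights, TIERS)]
    return max(range(len(TIERS)), key=values.__getitem__)
```

With `epsilon=0.0` the choice is deterministic, which makes the cost penalty visible: under uniform weights the cheapest tier wins, while a strongly weighted high tier overrides the penalty.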
📝 Abstract
Multimodal Large Language Models (MLLMs) achieve stronger visual understanding by scaling input fidelity, yet the resulting visual token growth makes jointly sustaining high spatial resolution and long temporal context prohibitive. We argue that the bottleneck lies not in how post-encoding representations are compressed but in the volume of pixels the encoder receives, and address it with ResAdapt, an input-side adaptation framework that learns how much visual budget each frame should receive before encoding. ResAdapt couples a lightweight Allocator with an unchanged MLLM backbone, so the backbone retains its native visual-token interface while receiving an operator-transformed input. We formulate allocation as a contextual bandit and train the Allocator with Cost-Aware Policy Optimization (CAPO), which converts sparse rollout feedback into a stable accuracy–cost learning signal. Across budget-controlled video QA, temporal grounding, and image reasoning tasks, ResAdapt improves low-budget operating points and often lies on or near the efficiency–accuracy frontier, with the clearest gains on reasoning-intensive benchmarks under aggressive compression. Notably, ResAdapt supports up to 16× more frames at the same visual budget while delivering over 15% performance gain. Code is available at https://github.com/Xnhyacinth/ResAdapt.
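The abstract says CAPO converts sparse rollout feedback into an accuracy–cost signal. One minimal reading of such a cost-aware objective is an accuracy term minus a penalized, budget-normalized token cost; the sketch below is that reading only, with `capo_reward` and the weight `lam` as illustrative assumptions rather than the paper's exact formula.

```python
def capo_reward(correct: bool, tokens_used: int,
                budget: int, lam: float = 0.5) -> float:
    """Cost-aware reward sketch (not the paper's exact objective).

    correct:     sparse rollout outcome (did the backbone answer correctly?).
    tokens_used: visual tokens actually spent by the chosen allocation.
    budget:      total visual-token budget available.
    lam:         trade-off weight between accuracy and cost.
    """
    acc = 1.0 if correct else 0.0
    cost = tokens_used / budget  # fraction of the visual budget spent
    return acc - lam * cost
```

Under this shape, a correct answer that spends half the budget scores 0.75, while a wrong answer that exhausts the budget scores -0.5, so the policy is pushed toward allocations that are both accurate and cheap.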
Problem

Research questions and friction points this paper aims to address.

Multimodal Large Language Models
visual token growth
spatial resolution
temporal context
input fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

ResAdapt
adaptive resolution
multimodal reasoning
input-side adaptation
cost-aware policy optimization
Huanxuan Liao
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Large Language Model · Long Context Modeling
Zhongtao Jiang
Institute of Automation, Chinese Academy of Sciences
Yupu Hao
Institute of Automation, Chinese Academy of Sciences
Yuqiao Tan
Institute of Automation, Chinese Academy of Sciences
LLMs Reasoning · LLMs Interpretability
Shizhu He
Institute of Automation, Chinese Academy of Sciences
Jun Zhao
Institute of Automation, Chinese Academy of Sciences
Kun Xu
Project Leader
Kang Liu
Institute of Automation, Chinese Academy of Sciences