🤖 AI Summary
Existing visual large language models (VLLMs) employ fixed input resolutions, limiting their adaptability to task-specific perceptual granularity requirements and thus degrading performance. To address this, we propose a task-aware adaptive resolution optimization framework. First, we systematically characterize the joint influence of image complexity and model uncertainty on the optimal resolution for a task. Building on this insight, we design a resolution selection mechanism grounded in an empirically derived formula. Furthermore, we introduce a parameter-efficient fine-tuning strategy that ensures stable cross-resolution transfer to arbitrary input sizes. Evaluated across diverse vision-language understanding tasks, our method consistently improves accuracy while preserving inference efficiency and cross-task generalization. This work establishes a novel paradigm for resolution-task co-optimization in VLLMs, advancing both theoretical understanding and practical deployment.
📄 Abstract
Real-world vision-language applications demand varying levels of perceptual granularity. However, most existing visual large language models (VLLMs), such as LLaVA, presume a fixed input resolution for downstream tasks, which leads to subpar performance. To address this problem, we first conduct a comprehensive and pioneering investigation into the resolution preferences of different vision-language tasks, revealing a correlation between a task's resolution preference and both the image complexity and the uncertainty variance of the VLLM across different input resolutions. Building on this insight, we propose an empirical formula that combines these two factors to determine the optimal resolution for a given vision-language task. Second, based on rigorous experiments, we propose a novel parameter-efficient fine-tuning technique that extends the visual input resolution of pre-trained VLLMs to the identified optimal resolution. Extensive experiments on various vision-language tasks validate the effectiveness of our method.
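The abstract does not give the empirical formula itself, but the idea of combining image complexity with the model's uncertainty variance across candidate resolutions can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the complexity proxy (mean gradient magnitude), the uncertainty proxy (variance of predictive entropy), the log-resolution scoring rule, and the weight `alpha` are all hypothetical choices made here for illustration.

```python
import numpy as np

def image_complexity(img: np.ndarray) -> float:
    """Hypothetical complexity proxy: mean gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def uncertainty_variance(probs_per_resolution) -> float:
    """Hypothetical uncertainty proxy: variance of predictive entropy
    over the model's output distributions at each candidate resolution."""
    entropies = [-np.sum(p * np.log(p + 1e-12)) for p in probs_per_resolution]
    return float(np.var(entropies))

def select_resolution(candidates, complexity, uncertainties, alpha=0.5) -> int:
    """Hypothetical scoring rule: complex images favor higher resolution
    (log-scaled), while resolutions with high model uncertainty are penalized."""
    scores = [alpha * complexity * np.log(r) - (1.0 - alpha) * u
              for r, u in zip(candidates, uncertainties)]
    return candidates[int(np.argmax(scores))]

# Example: three candidate resolutions; the model is least uncertain at 448.
best = select_resolution([224, 448, 672], complexity=1.0,
                         uncertainties=[0.5, 0.1, 0.9])
```

In this toy setup the trade-off picks the middle resolution: 672 wins on the complexity term but loses on the uncertainty penalty. The actual formula in the paper may differ in both functional form and in how the two factors are measured.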