Task-Aware Resolution Optimization for Visual Large Language Models

๐Ÿ“… 2025-10-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing visual large language models (VLLMs) employ fixed input resolutions, limiting their adaptability to task-specific perceptual granularity requirements and thus degrading performance. To address this, the authors propose a task-aware adaptive resolution optimization framework. First, they systematically characterize the joint influence of image complexity and model uncertainty on the optimal resolution for a task. Building on this insight, they design a resolution selection mechanism grounded in an empirically derived formula. They further introduce a parameter-efficient fine-tuning strategy that enables stable cross-resolution transfer for arbitrary input sizes. Evaluated across diverse vision-language understanding tasks, the method consistently improves accuracy while preserving inference efficiency and cross-task generalization, establishing a paradigm for resolution-task co-optimization in VLLMs.


๐Ÿ“ Abstract
Real-world vision-language applications demand varying levels of perceptual granularity. However, most existing visual large language models (VLLMs), such as LLaVA, pre-assume a fixed resolution for downstream tasks, which leads to subpar performance. To address this problem, we first conduct a comprehensive and pioneering investigation into the resolution preferences of different vision-language tasks, revealing a correlation between resolution preferences and both image complexity and the uncertainty variance of the VLLM at different input resolutions. Building on this insight, we propose an empirical formula that combines these two factors to determine the optimal resolution for a given vision-language task. Second, based on rigorous experiments, we propose a novel parameter-efficient fine-tuning technique to extend the visual input resolution of pre-trained VLLMs to the identified optimal resolution. Extensive experiments on various vision-language tasks validate the effectiveness of our method.
Problem

Research questions and friction points this paper is trying to address.

Optimizing image resolution for visual language model tasks
Addressing subpar performance from fixed input resolutions
Determining optimal resolution using image complexity and uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Determines optimal resolution using image complexity and uncertainty
Extends VLLM resolution via parameter-efficient fine-tuning technique
Empirically validates method across diverse vision-language tasks
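The paper's resolution-selection idea — combine an image-complexity measure with the model's uncertainty variance at each candidate resolution — can be sketched in code. The paper does not disclose its empirical formula here, so the gradient-based complexity proxy, the linear scoring rule, and the `alpha` weight below are all illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch of task-aware resolution selection. Combines a
# simple image-complexity proxy with precomputed uncertainty variances;
# the scoring formula and alpha weight are assumptions for illustration.

def complexity(img):
    """Mean absolute gradient of a grayscale image (nested lists in
    [0, 1]) -- a crude stand-in for the paper's complexity measure."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(img[y][x + 1] - img[y][x]); count += 1
            if y + 1 < h:
                total += abs(img[y + 1][x] - img[y][x]); count += 1
    return total / count

def select_resolution(img, uncertainty_var, alpha=0.5):
    """Pick the candidate resolution with the best combined score.

    uncertainty_var: dict mapping resolution -> variance of the VLLM's
    predictive uncertainty at that input size (assumed measured offline).
    Higher complexity favors larger resolutions; lower uncertainty
    variance is preferred. alpha is an assumed mixing weight.
    """
    c = complexity(img)
    top = max(uncertainty_var)
    return max(
        uncertainty_var,
        key=lambda r: alpha * c * (r / top)
                      - (1 - alpha) * uncertainty_var[r],
    )
```

Under this scoring rule, a high-complexity image (e.g. a checkerboard) pushes the choice toward larger resolutions, while a flat image defaults to whichever resolution the model is most stable at.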