Towards Minimal Fine-Tuning of VLMs

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To preserve text reasoning capability while improving computational efficiency during vision-language model (VLM) fine-tuning, this paper proposes Image-LoRA, a lightweight, parameter-efficient fine-tuning method targeting the visual-token pathway. Its core contributions are: (1) applying vision-specific low-rank adaptation exclusively to the value projection of attention layers; (2) selecting high-contribution attention heads via rank-1 head-influence estimation; and (3) employing selection-size normalization to keep per-layer updates stable. Experiments demonstrate that Image-LoRA matches or closely approaches standard LoRA accuracy on referring expression understanding and grounding tasks, reduces adapter-only training FLOPs roughly in proportion to the visual-token fraction, uses significantly fewer trainable parameters, and preserves the VLM's native text reasoning capability, with no degradation in GSM8K accuracy.

📝 Abstract
We introduce Image-LoRA, a lightweight parameter efficient fine-tuning (PEFT) recipe for transformer-based vision-language models (VLMs). Image-LoRA applies low-rank adaptation only to the value path of attention layers within the visual-token span, reducing adapter-only training FLOPs roughly in proportion to the visual-token fraction. We further adapt only a subset of attention heads, selected using head influence scores estimated with a rank-1 Image-LoRA, and stabilize per-layer updates via selection-size normalization. Across screen-centric grounding and referring benchmarks spanning text-heavy to image-heavy regimes, Image-LoRA matches or closely approaches standard LoRA accuracy while using fewer trainable parameters and lower adapter-only training FLOPs. The method also preserves the pure-text reasoning performance of VLMs before and after fine-tuning, as further shown on GSM8K.
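The core idea in the abstract can be illustrated with a minimal sketch: a low-rank update is added to the value projection, but only at visual-token positions, so adapter compute scales with the visual-token fraction. The function and variable names below are hypothetical, not from the paper's code.

```python
import numpy as np

def value_path_with_image_lora(X, V, A, B, visual_mask, scale):
    """Value projection with a low-rank update restricted to visual tokens.

    X: (T, d) token activations; V: (d, d) frozen value weight.
    A: (d, r), B: (r, d) low-rank adapter factors (r << d).
    visual_mask: (T,) 1.0 for visual tokens, 0.0 for text tokens.
    """
    out = X @ V                       # frozen base value projection
    delta = (X @ A) @ B * scale       # rank-r update, cheap: O(T * d * r)
    # Restrict the update to the visual-token span (the Image-LoRA idea):
    return out + delta * visual_mask[:, None]
```

Because `delta` only needs to be computed where the mask is nonzero, adapter-only FLOPs shrink roughly in proportion to the visual-token fraction; the dense form above keeps the sketch simple.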
Problem

Research questions and friction points this paper is trying to address.

Reducing the computational cost of vision-language model fine-tuning
Maintaining model accuracy with fewer trainable parameters
Preserving text reasoning ability during visual adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank adaptation to visual attention values
Selective fine-tuning of influential attention heads
Normalization for stable per-layer updates
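The second and third innovations can be sketched together: heads are ranked by influence scores (which the paper estimates with a rank-1 probe run), the top-k are adapted, and the per-layer scale is normalized by the selection size. The 1/sqrt(k) normalization here is an assumption for illustration; the paper's exact formula may differ.

```python
import numpy as np

def select_heads(influence_scores, k):
    """Pick the top-k attention heads by influence score.

    influence_scores: per-head scores, assumed estimated from a
    rank-1 Image-LoRA probe. Returns sorted head indices and a
    selection-size-normalized update scale (assumed 1/sqrt(k)),
    so layers adapting more heads do not get larger total updates.
    """
    idx = np.argsort(np.asarray(influence_scores))[::-1][:k]
    scale = 1.0 / np.sqrt(k)
    return sorted(idx.tolist()), scale
```

For example, with scores `[0.1, 0.9, 0.3, 0.7]` and `k=2`, heads 1 and 3 are selected and the shared update scale is 1/√2.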