LoRA-TTT: Low-Rank Test-Time Training for Vision-Language Models

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations of existing test-time training (TTT) methods for vision-language models (e.g., CLIP) under distribution shift: heavy reliance on text prompt tuning, high computational overhead, and overdependence on entropy minimization. We propose a lightweight image-side TTT framework that, for the first time, integrates Low-Rank Adaptation (LoRA) into TTT, updating only the low-rank adaptation parameters in the image encoder while freezing both the text encoder and the backbone. To enable efficient multi-domain adaptation without additional memory cost, we introduce a reconstruction loss optimized jointly with entropy minimization. Evaluated across 15 benchmarks, our method significantly improves the zero-shot top-1 accuracy of CLIP-ViT-B/16: by 5.79% on average on out-of-distribution datasets and by 1.36% on fine-grained recognition, outperforming state-of-the-art test-time prompt tuning approaches.
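The core mechanism, updating only low-rank adapters while the pretrained weights stay frozen, can be sketched as below. This is a minimal illustrative LoRA layer, not the paper's implementation; the class name, shapes, and hyperparameters (`r`, `alpha`) are assumptions.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-augmented linear layer (illustrative sketch).

    Output: y = x W^T + (x A^T) B^T * (alpha / r).
    Only A and B would be updated at test time; the pretrained
    weight W stays frozen, preserving the model's zero-shot behavior.
    """

    def __init__(self, weight, r=4, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = weight                             # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = weight.shape
        self.A = rng.normal(0, 0.01, (r, in_dim))   # trainable down-projection
        self.B = np.zeros((out_dim, r))             # trainable up-projection, zero-initialized
        self.scale = alpha / r

    def __call__(self, x):
        # Base path uses the frozen weight; the LoRA path adds a low-rank update.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scale
```

Because `B` is zero-initialized, the layer's output at the start of test-time training is identical to the frozen model's, so adaptation begins from the original generalization behavior.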

📝 Abstract
The rapid advancements in vision-language models (VLMs), such as CLIP, have intensified the need to address distribution shifts between training and testing datasets. Although prior Test-Time Training (TTT) techniques for VLMs have demonstrated robust performance, they predominantly rely on tuning text prompts, a process that demands substantial computational resources and is heavily dependent on entropy-based loss. In this paper, we propose LoRA-TTT, a novel TTT method that leverages Low-Rank Adaptation (LoRA), applied exclusively to the image encoder of VLMs. By introducing LoRA and updating only its parameters during test time, our method offers a simple yet effective TTT approach, retaining the model's initial generalization capability while achieving substantial performance gains with minimal memory and runtime overhead. Additionally, we introduce a highly efficient reconstruction loss tailored for TTT. Our method can adapt to diverse domains by combining these two losses, without increasing memory consumption or runtime. Extensive experiments on two benchmarks, covering 15 datasets, demonstrate that our method improves the zero-shot top-1 accuracy of CLIP-ViT-B/16 by an average of 5.79% on the OOD benchmark and 1.36% on the fine-grained benchmark, efficiently surpassing test-time prompt tuning, without relying on any external models or cache.
Problem

Research questions and friction points this paper is trying to address.

Address distribution shifts in vision-language models
Reduce computational resources in test-time training
Improve zero-shot accuracy with minimal overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Rank Adaptation (LoRA)
Efficient reconstruction loss
Minimal memory and runtime overhead
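One of the two TTT objectives, entropy minimization over the zero-shot class distribution, can be sketched as below. This is an assumed, simplified formulation (function name, temperature value, and input conventions are illustrative); the paper's companion reconstruction loss on the image encoder is omitted here.

```python
import numpy as np

def entropy_loss(image_feats, text_feats, temperature=0.01):
    """Mean entropy of CLIP-style zero-shot predictions (illustrative sketch).

    image_feats: (N, d) L2-normalized image embeddings (e.g., augmented views).
    text_feats:  (K, d) L2-normalized class-prompt embeddings (frozen).
    Minimizing this term sharpens the model's predictions at test time.
    """
    logits = image_feats @ text_feats.T / temperature   # (N, K) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=1).mean())
```

A uniform prediction over K classes yields the maximum value log(K), while a confident one-hot prediction yields a value near zero, which is what gradient descent on this loss drives the adapter parameters toward.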