🤖 AI Summary
Prompt tuning in vision-language models suffers from poor generalization, and conventional label smoothing (LS) further degrades it. To address this, the authors propose Alternating Training-based Label Smoothing (ATLaS), which alternates optimization between hard one-hot labels and LS-generated soft labels to mitigate the adverse impact of standard LS on prompt tuning. It further introduces two types of efficient offline soft labels, Class-wise Soft Labels (CSL) and Instance-wise Soft Labels (ISL), to explicitly model inter-class semantic relationships and instance-to-class relationships. Theoretical analysis and an efficient offline soft-label generation strategy ensure seamless compatibility with mainstream prompt-learning frameworks. Extensive experiments demonstrate that ATLaS consistently enhances the generalization of diverse prompt-tuning methods across multiple benchmarks, yielding average improvements of 1.2–2.8 percentage points. ATLaS is plug-and-play, framework-agnostic, and exhibits strong cross-task generalizability.
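As a reference point for what "LS-generated soft labels" means, vanilla label smoothing replaces the one-hot target with a mixture of the one-hot vector and a uniform distribution over classes. A minimal NumPy sketch of this standard formulation (the smoothing strength `eps` and uniform prior are textbook LS, not values from this paper):

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Vanilla label smoothing: mix the one-hot target with a uniform
    distribution over all classes; eps controls the smoothing strength."""
    one_hot = np.eye(num_classes)[y]
    return (1.0 - eps) * one_hot + eps / num_classes

# Class 2 of 4 with eps = 0.1: each wrong class gets eps/K = 0.025,
# the true class gets 1 - eps + eps/K = 0.925.
soft = smooth_labels(np.array([2]), num_classes=4, eps=0.1)
```

ATLaS's CSL and ISL replace this uniform mixture with offline soft labels that encode class- or instance-level structure, but the alternation with hard labels applies on top of either form.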
📝 Abstract
Recent advances in pre-trained vision-language models have demonstrated remarkable zero-shot generalization capabilities. To further enhance these models' adaptability to various downstream tasks, prompt tuning has emerged as a parameter-efficient fine-tuning method. Despite its efficiency, however, the generalization ability of prompt tuning remains limited. In contrast, label smoothing (LS) is widely recognized as an effective regularization technique that prevents models from becoming over-confident and improves their generalization, which inspires us to explore integrating LS with prompt tuning. However, we observe that vanilla LS even weakens the generalization ability of prompt tuning. To address this issue, we propose the Alternating Training-based Label Smoothing (ATLaS) method, which alternately trains with standard one-hot labels and soft labels generated by LS to supervise prompt tuning. Moreover, we introduce two types of efficient offline soft labels, namely Class-wise Soft Labels (CSL) and Instance-wise Soft Labels (ISL), to provide inter-class or instance-to-class relationships for prompt tuning. We analyze the theoretical properties of the proposed ATLaS method. Extensive experiments demonstrate that ATLaS, combined with CSL and ISL, consistently enhances the generalization performance of prompt tuning. Moreover, ATLaS exhibits high compatibility with prevalent prompt tuning methods, enabling seamless integration into existing pipelines.
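The alternating supervision at the heart of ATLaS can be sketched in a few lines. The even/odd epoch schedule and the soft-target cross-entropy below are illustrative assumptions for exposition, not the authors' exact implementation:

```python
import numpy as np

def cross_entropy(logits, target_dist):
    """Soft-target cross-entropy: -sum_k q_k * log p_k with p = softmax(logits).
    Reduces to standard cross-entropy when target_dist is one-hot."""
    logp = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
    return -np.sum(target_dist * logp, axis=-1).mean()

def pick_targets(epoch, one_hot, soft):
    """Alternate the supervision signal per epoch: hard one-hot labels on
    even epochs, offline soft labels (e.g. CSL or ISL) on odd epochs.
    The even/odd split is a hypothetical schedule for illustration."""
    return one_hot if epoch % 2 == 0 else soft
```

In an actual prompt-tuning loop, `cross_entropy(logits, pick_targets(epoch, one_hot, soft))` would supervise the learnable prompt parameters, so the model sees both confident hard targets and relationship-aware soft targets over training.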