🤖 AI Summary
This work addresses the stability-plasticity dilemma in few-shot adaptation of large-scale vision-language models such as CLIP, together with the boundary bias and lack of global structural regularization inherent in existing training-free methods. To this end, the authors propose a training-free collaborative adaptation framework that, for the first time, brings global proximal regularization in a reproducing kernel Hilbert space (RKHS) to one-shot vision-language adaptation. The method leverages semantic knowledge from both CLIP and GPT-3 to generate cross-modal bridging samples and incorporates a multi-scale adaptive RBF kernel to enhance robustness. Through a four-stage optimization pipeline -- hybrid prior construction, support set augmentation, distribution correction, and multi-scale kernel ensemble -- the approach achieves an average accuracy of 65.83% across 11 benchmarks, setting a new state-of-the-art in one-shot vision-language adaptation.
📝 Abstract
The adaptation of large-scale Vision-Language Models (VLMs) such as CLIP to downstream tasks with extremely limited data -- specifically in the one-shot regime -- is often hindered by the "Stability-Plasticity" dilemma. While training-free methods such as Tip-Adapter introduce efficient caching mechanisms, they often function as local Nadaraya-Watson estimators, which suffer from inherent boundary bias and lack global structural regularization. This paper proposes ReHARK (Refined Hybrid Adaptive RBF Kernels), a synergistic training-free framework that reinterprets few-shot adaptation as global proximal regularization in a Reproducing Kernel Hilbert Space (RKHS). ReHARK introduces a multi-stage refinement pipeline consisting of: (1) Hybrid Prior Construction, which fuses zero-shot textual knowledge from CLIP and GPT-3 with visual class prototypes to form a robust semantic-visual anchor; (2) Support Set Augmentation (Bridging), which generates intermediate samples to smooth the transition between the visual and textual modalities; (3) Adaptive Distribution Rectification, which aligns test feature statistics with the augmented support set to mitigate domain shifts; and (4) Multi-Scale RBF Kernels, which employ an ensemble of kernels to capture complex feature geometries across diverse scales. Extensive experiments on 11 diverse benchmarks demonstrate superior stability and accuracy: ReHARK establishes a new state-of-the-art for one-shot adaptation with an average accuracy of 65.83%, significantly outperforming existing baselines. Code is available at https://github.com/Jahid12012021/ReHARK.
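To make the Nadaraya-Watson view and the multi-scale kernel idea concrete, here is a minimal sketch of a training-free classifier that scores a test feature against a cached support set with an ensemble of RBF kernels at several bandwidths. This is an illustrative assumption, not the paper's implementation: the function names, the bandwidth values, and the uniform averaging over scales are all placeholders.

```python
import numpy as np

def rbf_weights(x, support, gamma):
    """RBF kernel similarities between one test feature and all support features."""
    d2 = np.sum((support - x) ** 2, axis=1)  # squared Euclidean distances
    return np.exp(-gamma * d2)

def multi_scale_logits(x, support, labels, num_classes, gammas=(0.5, 1.0, 2.0)):
    """Average class scores over an ensemble of RBF bandwidths (gammas are placeholders)."""
    logits = np.zeros(num_classes)
    for g in gammas:
        w = rbf_weights(x, support, g)
        for c in range(num_classes):
            # Nadaraya-Watson style vote: sum of kernel weights of class-c support samples
            logits[c] += w[labels == c].sum()
    return logits / len(gammas)
```

The intuition behind the ensemble: a single bandwidth commits to one smoothing scale, whereas averaging over several bandwidths hedges against a mis-specified scale when the support set is as small as one shot per class.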