🤖 AI Summary
This work addresses a limitation of existing vision-language model adaptation methods: they apply a uniform adapter architecture to both modalities, neglecting the inherent structural differences between images and text and constraining downstream performance. To overcome this, the authors propose HeBA, a novel adapter framework that, they argue, introduces heterogeneous architectural design for the first time: two-dimensional depthwise separable convolutions process visual tokens, while dense linear projections handle textual tokens. The approach further incorporates bottleneck compression (reducing dimensionality to D/4) and replaces the usual zero-initialization with Kaiming initialization, which guarantees non-zero gradients from the first training step. Together, these choices inject modality-specific inductive biases, significantly accelerating convergence and enhancing robustness; HeBA achieves state-of-the-art performance across 11 few-shot benchmarks with consistently higher accuracy and stability.
📝 Abstract
Adapting large-scale Vision-Language Models (VLMs) like CLIP to downstream tasks often suffers from a "one-size-fits-all" architectural approach, where visual and textual tokens are processed uniformly by wide, generic adapters. We argue that this homogeneity ignores the distinct structural nature of the modalities -- spatial locality in images versus semantic density in text. To address this, we propose HeBA (Heterogeneous Bottleneck Adapter), a unified architectural framework that introduces modality-specific structural inductive biases. HeBA departs from conventional designs through three key architectural innovations: (1) Heterogeneity: It processes visual tokens via 2D depthwise-separable convolutions to preserve spatial correlations, while processing text tokens via dense linear projections to capture semantic relationships; (2) Bottleneck Regularization: Unlike standard expanding adapters, HeBA employs a compression bottleneck (D -> D/4) that explicitly forces the model to learn compact, robust features and acts as a structural regularizer; and (3) Active Gradient Initialization: We challenge the restrictive zero-initialization paradigm, utilizing a Kaiming initialization strategy that ensures sufficient initial gradient flow to accelerate convergence without compromising the frozen backbone's pre-trained knowledge. Extensive experiments demonstrate that HeBA's architecturally specialized design achieves superior stability and accuracy, establishing a new state-of-the-art on 11 few-shot benchmarks. Code is available at https://github.com/Jahid12012021/VLM-HeBA.
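The three design ideas above can be sketched in a few lines of pure NumPy. This is a rough illustrative sketch, not the paper's implementation: the 3x3 kernel size, ReLU activation, residual connection, and the exact layer ordering inside the bottleneck are assumptions; only the heterogeneous conv-vs-linear split, the D -> D/4 compression, and the Kaiming (He) initialization are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def kaiming(shape, fan_in):
    # He-normal init, std = sqrt(2 / fan_in): weights are non-zero at step 0,
    # so gradients flow immediately (vs. the zero-init common in adapters).
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

def depthwise_separable_conv(x, dw_kernel, pw_weight):
    """x: (H, W, C) visual tokens reshaped to their 2D grid.
    dw_kernel: (k, k, C), one spatial filter per channel (depthwise step).
    pw_weight: (C, C_out), 1x1 pointwise mixing across channels."""
    H, W, C = x.shape
    k = dw_kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]            # (k, k, C)
            # per-channel spatial filtering: sum over the k x k window
            out[i, j] = np.einsum('abc,abc->c', patch, dw_kernel)
    return out.reshape(H * W, C) @ pw_weight           # pointwise 1x1 conv

D, grid = 64, 7                                        # e.g. a 7x7 visual token grid

# Bottleneck: compress D -> D/4 (structural regularizer), then expand back.
W_down = kaiming((D, D // 4), fan_in=D)
W_up   = kaiming((D // 4, D), fan_in=D // 4)
dw     = kaiming((3, 3, D // 4), fan_in=9)             # depthwise 3x3 (assumed size)
pw     = kaiming((D // 4, D // 4), fan_in=D // 4)

def visual_adapter(tokens):
    """Visual branch: conv-based mixing preserves spatial locality."""
    z = np.maximum(tokens @ W_down, 0.0)               # bottleneck D -> D/4 + ReLU
    z = z.reshape(grid, grid, D // 4)
    z = depthwise_separable_conv(z, dw, pw)            # 2D spatial mixing
    return tokens + z @ W_up                           # residual back to D (assumed)

# Text branch: same bottleneck, but plain dense projections (no spatial prior).
Wt_down = kaiming((D, D // 4), fan_in=D)
Wt_up   = kaiming((D // 4, D), fan_in=D // 4)

def text_adapter(tokens):
    return tokens + np.maximum(tokens @ Wt_down, 0.0) @ Wt_up

v = visual_adapter(rng.normal(size=(grid * grid, D)))  # -> (49, 64)
t = text_adapter(rng.normal(size=(16, D)))             # -> (16, 64)
```

Note the asymmetry: both branches share the D -> D/4 bottleneck, but only the visual branch reshapes its tokens back into a grid before mixing, which is exactly the modality-specific inductive bias the paper argues for.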