🤖 AI Summary
Traditional context prompting ensembles average textual features in the feature space, often causing class-center shifts that impair generalization in few-shot vision-language models (VLMs). To address this, we propose Cluster-Aware Prompt Ensemble Learning (CAPEL), which performs prompt ensembling in the classification logits space, thereby avoiding the distribution distortion induced by feature averaging. Our key contributions are: (1) a clustering-aware prompt assignment strategy that groups semantically similar prompts; (2) weighted ensemble integration in the logits space; and (3) a cluster-preserving regularization term coupled with an adaptive prompt weighting mechanism, which explicitly maintains discriminability and robustness within each prompt cluster. Extensive experiments across multiple few-shot vision benchmarks demonstrate that CAPEL consistently outperforms state-of-the-art methods, effectively mitigating prompt collapse and degradation in cross-dataset generalization.
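The cluster-preserving regularization mentioned in contribution (3) is not spelled out here, but its stated goal (keeping prompts distinct and specialized per cluster rather than collapsing into one direction) can be illustrated with a minimal hypothetical penalty. The function below is a sketch under assumptions, not CAPEL's actual loss: it pulls each prompt embedding toward its own cluster centroid and penalizes alignment between centroids of different clusters.

```python
import numpy as np

def cluster_preserving_reg(prompt_feats, cluster_ids):
    """Hypothetical cluster-preserving penalty (illustrative only).

    prompt_feats: (P, D) array of prompt embeddings.
    cluster_ids:  length-P array assigning each prompt to a cluster.
    Returns a scalar that is low when prompts cohere within their cluster
    and clusters point in distinct directions, and high when all prompts
    collapse into a single shared direction.
    """
    prompt_feats = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    cluster_ids = np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)

    # Per-cluster centroids, renormalized to the unit sphere.
    centroids = np.stack([prompt_feats[cluster_ids == c].mean(axis=0) for c in clusters])
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

    # Intra-cluster cohesion: each prompt should stay close to its centroid.
    idx = np.searchsorted(clusters, cluster_ids)
    cohesion = float(np.mean(1.0 - np.sum(prompt_feats * centroids[idx], axis=1)))

    # Inter-cluster separation: distinct centroids should not align.
    sims = centroids @ centroids.T
    off_diag = sims[~np.eye(len(clusters), dtype=bool)]
    separation = float(off_diag.mean()) if off_diag.size else 0.0

    return cohesion + separation
```

Under this toy formulation, a set of prompts that has collapsed into one direction is penalized through the separation term, while well-separated clusters incur little penalty.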
📝 Abstract
Vision-language models (VLMs) such as CLIP achieve zero-shot transfer across various tasks by pre-training on large collections of image-text pairs. These models often benefit from using an ensemble of context prompts to represent a class. Although effective, conventional prompt ensembling, which averages the textual features of context prompts, often yields suboptimal results because feature averaging shifts the class centroids away from the true class distribution. To address this issue, we propose the Cluster-Aware Prompt Ensemble Learning (CAPEL) framework, which preserves the cluster nature of context prompts. CAPEL classifies images into one of several class clusters, each represented by a distinct prompt. Instead of ensembling prompts in the feature space, we perform ensembling in the classification logits space, which aligns better with the visual feature distribution. To further optimize prompt fine-tuning while maintaining cluster-specific discriminative power, we introduce a cluster-preserving regularization term. This keeps prompts distinct and specialized for different clusters, preventing collapse into a uniform direction. Additionally, we integrate an adaptive prompt weighting technique that dynamically adjusts the attention weights of flawed or ambiguous prompts, ensuring robust performance across diverse datasets and tasks.
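The contrast between feature-space and logits-space ensembling described above can be sketched in a few lines. This is a minimal NumPy illustration under assumptions: the array shapes, the random features, and the uniform prompt weights are placeholders, and CAPEL's clustering and learned adaptive weighting are not shown.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # CLIP-style features live on the unit sphere; cosine similarity
    # then reduces to a dot product.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_prompts, num_classes, dim = 4, 5, 64

# Hypothetical text features: one embedding per (context prompt, class).
text_feats = l2_normalize(rng.normal(size=(num_prompts, num_classes, dim)))
image_feat = l2_normalize(rng.normal(size=(dim,)))

# Feature-space ensembling (conventional): average the prompt features
# first, then score. The averaged centroid can drift away from the
# individual prompts' directions.
avg_feats = l2_normalize(text_feats.mean(axis=0))      # (C, D)
logits_feature_space = avg_feats @ image_feat          # (C,)

# Logits-space ensembling (CAPEL-style): score the image against every
# prompt separately, then combine the per-prompt logits.
per_prompt_logits = text_feats @ image_feat            # (P, C)
weights = np.full(num_prompts, 1.0 / num_prompts)      # uniform stand-in for learned weights
logits_logit_space = weights @ per_prompt_logits       # (C,)
```

The two ensembles generally disagree: renormalizing the averaged feature rescales each class score by the norm of its centroid, which is exactly the distortion that combining per-prompt logits avoids.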