AI Summary
Long-tailed multi-label visual recognition faces two intertwined challenges: extreme label-distribution skew and a mismatch between conventional multi-label semantic modeling and zero-shot architectures. Existing methods rely on scarce tail-class samples to learn unreliable label correlations, while models such as CLIP are designed for single-label image-text matching and lack explicit mechanisms for modeling global label relationships. To address this, we propose the end-to-end correlation adaptation prompt network (CAPNET), which explicitly models label correlations using CLIP's text encoder. CAPNET integrates learnable soft prompts with graph convolutional layers to enable label-aware semantic propagation, and introduces a distribution-balanced focal loss with class-aware re-weighting to mitigate tail-class bias. With only lightweight parameter-efficient fine-tuning, CAPNET achieves substantial gains on tail classes and sets new state-of-the-art results on VOC-LT, COCO-LT, and NUS-WIDE.
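The label-aware semantic propagation described above can be sketched as a single graph-convolution step over label embeddings, H' = ReLU(Â H W). This is a minimal NumPy illustration, not the paper's implementation: here the correlation graph is built by thresholding cosine similarity between the label embeddings themselves, whereas CAPNET derives label semantics from CLIP's text encoder with learnable soft prompts.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(H, A_hat, W):
    """One propagation step: aggregate correlated labels' semantics, then project."""
    return np.maximum(A_hat @ H @ W, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
num_labels, dim = 5, 8
# Stand-in for per-label text embeddings (CAPNET would obtain these from CLIP).
H = rng.normal(size=(num_labels, dim))
norms = np.linalg.norm(H, axis=1)
sim = (H @ H.T) / (norms[:, None] * norms[None, :])   # cosine similarity
A = (sim > 0.0).astype(float)                         # threshold into a correlation graph
W = rng.normal(size=(dim, dim)) * 0.1                 # learnable projection (random here)

A_hat = normalize_adjacency(A)
H_out = gcn_layer(H, A_hat, W)                        # refined, correlation-aware embeddings
print(H_out.shape)
```

Stacking such layers (with learned W) lets tail-class embeddings borrow semantic evidence from correlated head classes, which is the motivation for propagating over the label graph rather than treating labels independently.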
Abstract
Long-tailed multi-label visual recognition poses a significant challenge, as images typically contain multiple labels with highly imbalanced class distributions, leading to biased models that favor head classes while underperforming on tail classes. Recent efforts have leveraged pre-trained vision-language models, such as CLIP, alongside long-tailed learning techniques to exploit rich visual-textual priors for improved performance. However, existing methods often derive semantic inter-class relationships directly from imbalanced datasets, resulting in unreliable correlations for tail classes due to data scarcity. Moreover, CLIP's zero-shot paradigm is optimized for single-label image-text matching, making it suboptimal for multi-label tasks. To address these issues, we propose the correlation adaptation prompt network (CAPNET), a novel end-to-end framework that explicitly models label correlations from CLIP's textual encoder. The framework incorporates a graph convolutional network for label-aware propagation and learnable soft prompts for refined embeddings, and it is trained with a distribution-balanced focal loss with class-aware re-weighting to handle imbalance. Furthermore, it improves generalization through test-time ensembling and realigns the visual and textual modalities via parameter-efficient fine-tuning, averting overfitting on tail classes without compromising head-class performance. Extensive experiments and ablation studies on the VOC-LT, COCO-LT, and NUS-WIDE benchmarks demonstrate that CAPNET achieves substantial improvements over state-of-the-art methods, validating its effectiveness for real-world long-tailed multi-label visual recognition.
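The training objective can be illustrated with a simplified sketch of a distribution-balanced focal loss: a per-label sigmoid focal term, (1 - p_t)^γ, combined with inverse-frequency class-aware re-weighting. The weighting scheme and constants below are illustrative assumptions; the exact CAPNET formulation may differ.

```python
import numpy as np

def db_focal_loss(logits, targets, class_freq, gamma=2.0):
    """Simplified distribution-balanced focal loss with class-aware re-weighting.

    logits:     (batch, num_labels) raw scores
    targets:    (batch, num_labels) binary multi-label ground truth
    class_freq: (num_labels,) per-class positive counts in the training set
    """
    p = 1.0 / (1.0 + np.exp(-logits))            # per-label sigmoid probability
    p_t = np.where(targets == 1, p, 1.0 - p)     # probability of the true outcome
    w = 1.0 / class_freq                         # inverse-frequency class weights
    w = w / w.mean()                             # normalize to mean 1 (assumption)
    eps = 1e-12
    # Focal term (1 - p_t)^gamma down-weights easy examples; w up-weights tail classes.
    loss = -w * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# Toy batch: 2 images, 4 labels with a long-tailed frequency profile (head -> tail).
logits = np.array([[2.0, -1.0, 0.5, -2.0],
                   [-0.5, 1.5, -1.0, 0.3]])
targets = np.array([[1, 0, 1, 0],
                    [0, 1, 0, 1]])
class_freq = np.array([1000.0, 500.0, 50.0, 5.0])
loss = db_focal_loss(logits, targets, class_freq)
print(float(loss))
```

The two mechanisms are complementary: the focal term addresses the easy-negative dominance typical of multi-label training, while the class-aware weights counteract the head-class bias induced by the long-tailed distribution.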