🤖 AI Summary
To address the weak discriminability of tail-class features and overreliance on language modality in long-tailed visual recognition, this paper proposes a purely vision-based feature enhancement method. Without incorporating textual supervision, it leverages large vision models (LVMs) or vision foundation models (VFMs) to extract generic visual representations, which are then injected into a baseline network via a cross-space fusion mechanism operating jointly on feature maps and latent space. A novel prototype-guided multi-prototype contrastive loss is introduced to enhance inter-tail-class separability without linguistic priors. Crucially, this work is the first to fully decouple LVMs/VFMs from language modalities for long-tailed feature enhancement. Extensive experiments on ImageNet-LT and iNaturalist2018 demonstrate significant improvements in tail-class accuracy, with overall top-1 accuracy surpassing state-of-the-art methods by 2.3–3.7%.
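The cross-space fusion described above can be illustrated with a minimal sketch. This is a hypothetical additive fusion under assumed shapes, not the paper's exact mechanism: the frozen LVM's feature map is projected channel-wise (1×1-conv style) and added to the baseline's feature map, and its latent vector is projected and added to the baseline's latent vector.

```python
import numpy as np

def cross_space_fuse(baseline_map, lvm_map, baseline_vec, lvm_vec, W_map, W_vec):
    """Hypothetical cross-space fusion sketch (additive; assumed shapes).
    baseline_map: (C, H, W) feature map from the baseline network
    lvm_map:      (C_lvm, H, W) feature map from the frozen LVM
    baseline_vec: (D,) latent vector from the baseline network
    lvm_vec:      (D_lvm,) latent vector from the frozen LVM
    W_map: (C, C_lvm) 1x1-conv-style channel projection
    W_vec: (D, D_lvm) linear projection for the latent space
    """
    # map-space fusion: project LVM channels onto baseline channels, residual add
    projected_map = np.einsum('ck,khw->chw', W_map, lvm_map)
    fused_map = baseline_map + projected_map
    # latent-space fusion: project LVM vector, residual add
    fused_vec = baseline_vec + W_vec @ lvm_vec
    return fused_map, fused_vec
```

In practice the projections would be learned layers and the merge could be gating or attention rather than plain addition; the sketch only shows that both spaces are fused jointly.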
📝 Abstract
Language-based foundation models, such as large language models (LLMs) and large vision-language models (LVLMs), have been widely studied for long-tailed recognition. However, the linguistic data they require are not available in all practical tasks. In this study, we explore using large vision models (LVMs) or visual foundation models (VFMs) to enhance long-tailed data features without any language information. Specifically, we extract features from the LVM and fuse them with the baseline network's features in both the feature-map and latent spaces to obtain augmented features. Moreover, we design several prototype-based losses in the latent space to further exploit the potential of the augmented features. In the experimental section, we validate our approach on two benchmark datasets: ImageNet-LT and iNaturalist2018.
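One plausible form for a prototype-based loss in the latent space is a multi-prototype contrastive objective: each class keeps K prototypes, a sample is pulled toward its nearest same-class prototype and pushed from all others. The sketch below is an assumed formulation for illustration, not the paper's exact loss; all names and the temperature value are hypothetical.

```python
import numpy as np

def multi_prototype_contrastive_loss(feats, labels, prototypes, tau=0.1):
    """Hypothetical multi-prototype contrastive loss (sketch).
    feats:      (N, D) L2-normalized latent features
    labels:     (N,) integer class labels
    prototypes: (C, K, D) K L2-normalized prototypes per class
    Pulls each sample toward the nearest prototype of its own class,
    against a softmax over all C*K prototypes.
    """
    N, D = feats.shape
    C, K, _ = prototypes.shape
    logits = (feats @ prototypes.reshape(C * K, D).T) / tau   # (N, C*K)
    # denominator: log-sum-exp over every prototype (numerically stable)
    m = logits.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    # numerator: best-matching prototype of the sample's own class
    pos = np.array([logits[i, labels[i] * K:(labels[i] + 1) * K].max()
                    for i in range(N)])
    return float(np.mean(lse - pos))
```

Using the nearest same-class prototype (rather than a single class centroid) lets one class occupy several modes in the latent space, which is one way to improve separability among tail classes.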