Enhancing Features in Long-tailed Data Using Large Vision Mode

📅 2025-04-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the weak discriminability of tail-class features and overreliance on language modality in long-tailed visual recognition, this paper proposes a purely vision-based feature enhancement method. Without incorporating textual supervision, it leverages large vision models (LVMs) or vision foundation models (VFMs) to extract generic visual representations, which are then injected into a baseline network via a cross-space fusion mechanism operating jointly on feature maps and latent space. A novel prototype-guided multi-prototype contrastive loss is introduced to enhance inter-tail-class separability without linguistic priors. Crucially, this work is the first to fully decouple LVMs/VFMs from language modalities for long-tailed feature enhancement. Extensive experiments on ImageNet-LT and iNaturalist2018 demonstrate significant improvements in tail-class accuracy, with overall top-1 accuracy surpassing state-of-the-art methods by 2.3–3.7%.

Technology Category

Application Category

📝 Abstract
Language-based foundation models, such as large language models (LLMs) or large vision-language models (LVLMs), have been widely studied in long-tailed recognition. However, the need for linguistic data is not applicable to all practical tasks. In this study, we aim to explore using large vision models (LVMs) or visual foundation models (VFMs) to enhance long-tailed data features without any language information. Specifically, we extract features from the LVM and fuse them with features in the baseline network's map and latent space to obtain the augmented features. Moreover, we design several prototype-based losses in the latent space to further exploit the potential of the augmented features. In the experimental section, we validate our approach on two benchmark datasets: ImageNet-LT and iNaturalist2018.
Problem

Research questions and friction points this paper is trying to address.

Enhancing long-tailed data features without language information
Fusing LVM features with baseline network for augmented features
Designing prototype-based losses to exploit augmented features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large vision models for feature enhancement
Fuses LVM features with baseline network features
Applies prototype-based losses in latent space
🔎 Similar Papers
No similar papers found.
P
Pengxiao Han
The Australian National University, Canberra, Australia
Changkun Ye
Changkun Ye
PhD, Australian National University
Reinforcement LearningProbabilistic Machine LearningDistribution Shift
Jinguang Tong
Jinguang Tong
Australian National University
computer vision3d reconstruction
C
Cuicui Jiang
HKUST, Hong Kong SAR
J
Jie Hong
The University of Hong Kong, Hong Kong SAR
L
Li Fang
Chinese Academy of Sciences, China
X
Xuesong Li
The Australian National University & CSIRO, Canberra, Australia