LIFT+: Lightweight Fine-Tuning for Long-Tail Learning

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In long-tailed learning, reparameterization-based fine-tuning often causes severe performance degradation on tail classes, primarily due to inconsistent class-conditional distributions. This work is the first to identify and formalize this mechanism. The authors propose LIFT+, a lightweight fine-tuning framework comprising semantic-aware initialization, minimalist data augmentation, and test-time ensembling, which updates fewer than 1% of parameters and reduces training epochs from ~100 to ≤15. LIFT+ establishes a lightweight fine-tuning paradigm centered on class-conditional distribution consistency, significantly enhancing tail-class generalization while preserving computational efficiency. Extensive experiments across multiple long-tailed benchmarks demonstrate that LIFT+ consistently surpasses state-of-the-art methods in both accuracy and efficiency.

📝 Abstract
The fine-tuning paradigm has emerged as a prominent approach for addressing long-tail learning tasks in the era of foundation models. However, the impact of fine-tuning strategies on long-tail learning performance remains unexplored. In this work, we disclose that existing paradigms exhibit a profound misuse of fine-tuning methods, leaving significant room for improvement in both efficiency and accuracy. Specifically, we reveal that heavy fine-tuning (fine-tuning a large proportion of model parameters) can lead to non-negligible performance deterioration on tail classes, whereas lightweight fine-tuning demonstrates superior effectiveness. Through comprehensive theoretical and empirical validation, we identify this phenomenon as stemming from inconsistent class conditional distributions induced by heavy fine-tuning. Building on this insight, we propose LIFT+, an innovative lightweight fine-tuning framework to optimize consistent class conditions. Furthermore, LIFT+ incorporates semantic-aware initialization, minimalist data augmentation, and test-time ensembling to enhance adaptation and generalization of foundation models. Our framework provides an efficient and accurate pipeline that facilitates fast convergence and model compactness. Extensive experiments demonstrate that LIFT+ significantly reduces both training epochs (from $\sim$100 to $\leq$15) and learned parameters (less than 1%), while surpassing state-of-the-art approaches by a considerable margin. The source code is available at https://github.com/shijxcs/LIFT-plus.
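As a rough sanity check on the "less than 1% of learned parameters" claim, the sketch below assumes a frozen ViT-B/16-style backbone (~86M parameters) with only a linear classifier trained on top; the backbone size and 768-dim/1,000-class head are illustrative assumptions, not figures taken from the paper.

```python
# Hypothetical parameter budget for lightweight fine-tuning:
# freeze the backbone, train only a linear classification head.
backbone_params = 86_000_000          # assumed ViT-B/16 encoder size
feature_dim, num_classes = 768, 1000  # assumed head dimensions

# Trainable parameters: weight matrix + bias vector of the head.
classifier_params = feature_dim * num_classes + num_classes

fraction = classifier_params / (backbone_params + classifier_params)
print(f"trainable fraction: {fraction:.2%}")  # well under 1%
```

Under these assumptions the trainable fraction is roughly 0.9%, consistent with the abstract's sub-1% figure.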
Problem

Research questions and friction points this paper is trying to address.

Explores fine-tuning impact on long-tail learning performance
Addresses performance deterioration in tail classes from heavy fine-tuning
Proposes LIFT+ for efficient lightweight fine-tuning and better accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight fine-tuning for long-tail learning
Semantic-aware initialization and data augmentation
Test-time ensembling enhances model generalization
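The test-time ensembling idea above can be sketched as averaging class probabilities over several views of a test image; the aggregation below (mean of softmax outputs, argmax of the average) is a common scheme chosen for illustration and may differ from the paper's exact procedure.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(logits_per_view):
    """Average per-view class probabilities, then pick the top class.

    logits_per_view: list of logit lists, one per test-time view
    (hypothetical input shape for illustration).
    """
    probs = [softmax(view) for view in logits_per_view]
    n_views, n_classes = len(probs), len(probs[0])
    avg = [sum(p[c] for p in probs) / n_views for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three hypothetical views: two favor class 0, one favors class 1.
views = [[2.0, 1.0, 0.1], [1.5, 2.5, 0.2], [2.2, 0.9, 0.1]]
print(ensemble_predict(views))  # 0
```

Averaging probabilities rather than raw logits keeps each view's contribution bounded, so one overconfident view cannot dominate the ensemble.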