LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning in Open-World Scenarios

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in open-world long-tailed semi-supervised learning (LTSSL), including low-quality pseudo-labels, model overconfidence, and interference from out-of-distribution (OOD) samples, this paper proposes LoFT, a parameter-efficient fine-tuning framework, and its open-world extension LoFT-OW. LoFT brings foundation models into the LTSSL paradigm, using lightweight adaptation to generate high-confidence, high-accuracy pseudo-labels. LoFT-OW further incorporates OOD detection and adaptive pseudo-label refinement, improving robustness to unknown classes. The method trains stably using only 1% of the unlabeled data and consistently surpasses state-of-the-art methods across multiple long-tailed semi-supervised benchmarks. Experimental results demonstrate strong accuracy and generalization under both class imbalance and open-world conditions, supporting the framework's efficiency and adaptability in realistic, distribution-shifted scenarios.
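The page does not include code, but the "lightweight adaptation" it describes is typically realized with a low-rank adapter. Below is a minimal, generic LoRA-style sketch in PyTorch; the class name, rank, and scaling are illustrative assumptions, not the paper's actual adapter design.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter around a frozen linear layer (a common PEFT scheme).

    Only A and B are trained, so the frozen foundation model is adapted with
    a small fraction of its parameters. Hypothetical sketch; the paper's
    exact adapter design is not reproduced here.
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        # Trainable low-rank update: delta_W = (alpha / rank) * B @ A
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero initial update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

In use, such a wrapper would replace selected linear layers of a pretrained backbone before semi-supervised fine-tuning, leaving the original weights untouched.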

📝 Abstract
Long-tailed learning has garnered increasing attention due to its wide applicability in real-world scenarios. Among existing approaches, Long-Tailed Semi-Supervised Learning (LTSSL) has emerged as an effective solution by incorporating a large amount of unlabeled data into the imbalanced labeled dataset. However, most prior LTSSL methods are designed to train models from scratch, which often leads to issues such as overconfidence and low-quality pseudo-labels. To address these challenges, we extend LTSSL into the foundation model fine-tuning paradigm and propose a novel framework: LoFT (Long-tailed semi-supervised learning via parameter-efficient Fine-Tuning). We demonstrate that fine-tuned foundation models can generate more reliable pseudo-labels, thereby benefiting imbalanced learning. Furthermore, we explore a more practical setting by investigating semi-supervised learning under open-world conditions, where the unlabeled data may include out-of-distribution (OOD) samples. To handle this problem, we propose LoFT-OW (LoFT under Open-World scenarios) to improve the discriminative ability. Experimental results on multiple benchmarks demonstrate that our method achieves superior performance compared to previous approaches, even when utilizing only 1% of the unlabeled data used by previous works.
Problem

Research questions and friction points this paper is trying to address.

Addresses overconfidence and low-quality pseudo-labels in long-tailed semi-supervised learning (see the thresholding sketch after this list)
Extends LTSSL to foundation model fine-tuning paradigm for improved reliability
Handles open-world scenarios with out-of-distribution samples in unlabeled data
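For context on the failure mode in the first bullet: pseudo-labels in SSL are usually selected by a fixed confidence threshold, as in the minimal sketch below. The function name and threshold value are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(logits: torch.Tensor, threshold: float = 0.95):
    """Keep only predictions whose softmax confidence clears a threshold.

    When an overconfident model is trained from scratch on long-tailed data,
    many wrong tail-class predictions still clear this bar, which is the
    low-quality pseudo-label problem described above. A better-calibrated,
    fine-tuned foundation model yields a cleaner selected set.
    """
    probs = F.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)
    mask = conf >= threshold
    return pseudo[mask], mask
```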
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient fine-tuning for long-tailed learning
Generating reliable pseudo-labels using foundation models
Open-world adaptation handling out-of-distribution samples (a filtering sketch follows this list)
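The open-world bullet above implies screening unlabeled samples before pseudo-labeling. One standard recipe for such a screen, not necessarily the detector LoFT-OW uses, is an energy score over the logits; the threshold tau below is an illustrative assumption.

```python
import torch

@torch.no_grad()
def in_distribution_mask(logits: torch.Tensor, tau: float = -5.0) -> torch.Tensor:
    """Energy-based OOD screen: E(x) = -logsumexp(logits).

    Lower energy suggests an in-distribution sample; samples above the
    threshold would be excluded from pseudo-labeling. A generic recipe,
    not the paper's specific LoFT-OW detector.
    """
    energy = -torch.logsumexp(logits, dim=-1)
    return energy <= tau
```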
Authors
Jiahao Chen (Renmin University of China)
Zhiyuan Huang (Renmin University of China)
Yurou Liu (Renmin University of China, AI4Science)
Bing Su (Renmin University of China)