Towards Macro-AUC oriented Imbalanced Multi-Label Continual Learning

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses Multi-Label Continual Learning (MLCL) under severe class imbalance and is, to the authors' knowledge, the first to directly optimize Macro-AUC in this setting. It proposes the Reweighted Label-Distribution-Aware Margin (RLDAM) loss, which models label-wise imbalance and calibrates per-label classification margins accordingly. To keep the loss's reweighting statistics faithful under limited memory, a compatible memory-updating strategy, Weight Retain Updating (WRU), maintains the per-label numbers of positive and negative instances of the original dataset in the replay buffer. Theoretically, the paper provides generalization analyses of the RLDAM-based algorithm in terms of Macro-AUC, in both batch MLL and MLCL settings, the first such generalization analyses in MLCL. Empirically, the method achieves significant improvements over state-of-the-art baselines across multiple imbalanced MLCL benchmarks, and the code is publicly available.
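The RLDAM loss is not spelled out on this page, so the following is only a minimal PyTorch-style sketch of what a reweighted, label-distribution-aware margin loss can look like for multi-label outputs. The 1/n^(1/4) margin schedule follows the original LDAM loss (Cao et al., 2019); the per-label inverse-count weights, the `scale` parameter, and the name `rldam_loss` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rldam_loss(logits, targets, n_pos, n_neg, scale=1.0):
    """Sketch of a reweighted label-distribution-aware margin loss.

    logits  : (batch, L) raw scores, one column per label
    targets : (batch, L) binary multi-label targets in {0, 1}
    n_pos, n_neg : (L,) per-label positive/negative counts of the
        training data (in CL, current task plus replay memory)
    """
    n_pos = n_pos.clamp(min=1).float()
    n_neg = n_neg.clamp(min=1).float()
    # LDAM-style margins: the rarer side of each label gets the
    # larger margin (1/4 exponent as in Cao et al., 2019).
    m_pos = scale / n_pos.pow(0.25)
    m_neg = scale / n_neg.pow(0.25)
    is_pos = targets.bool()
    # Push positives above +margin and negatives below -margin.
    shifted = logits - torch.where(is_pos, m_pos, -m_neg)
    # Inverse-count weights so each label's positives and negatives
    # contribute comparably, mirroring Macro-AUC's pairwise structure.
    w = torch.where(is_pos, 1.0 / n_pos, 1.0 / n_neg)
    return F.binary_cross_entropy_with_logits(
        shifted, targets.float(), weight=w, reduction="mean")
```

In an MLCL run, `n_pos`/`n_neg` would be recomputed after each memory update so that the margins and weights track the retained per-label statistics.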

📝 Abstract
In Continual Learning (CL), while existing work primarily focuses on multi-class classification, there has been limited research on Multi-Label Learning (MLL). In practice, MLL datasets are often class-imbalanced, which makes them inherently challenging; this problem is even more acute in CL. Owing to its sensitivity to imbalance, Macro-AUC is an appropriate and widely used measure in MLL, yet no prior work optimizes Macro-AUC specifically in Multi-Label Continual Learning (MLCL). To fill this gap, in this paper we propose a new memory-replay-based method to tackle the imbalance issue in Macro-AUC-oriented MLCL. Specifically, inspired by recent theoretical work, we propose a new Reweighted Label-Distribution-Aware Margin (RLDAM) loss. Furthermore, to be compatible with the RLDAM loss, we propose a new memory-updating strategy, Weight Retain Updating (WRU), which maintains the numbers of positive and negative instances of the original dataset in memory. Theoretically, we provide superior generalization analyses of the RLDAM-based algorithm in terms of Macro-AUC, separately in the batch MLL and MLCL settings; to our knowledge, this is the first work to offer theoretical generalization analyses in MLCL. Finally, a series of experiments illustrates the effectiveness of our method over several baselines. Our code is available at https://github.com/ML-Group-SDU/Macro-AUC-CL.
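The WRU update rule is likewise not detailed here. The sketch below illustrates one way a replay buffer could retain the stream's per-label positive/negative proportions, which is WRU's stated goal; the greedy swap heuristic, the candidate-sample size of 32, and the class name `WRUMemory` are assumptions for illustration, not the paper's algorithm.

```python
import random
import numpy as np

class WRUMemory:
    """Sketch of a replay buffer that retains per-label pos/neg ratios."""

    def __init__(self, capacity, num_labels):
        self.capacity = capacity
        self.buffer = []                      # stored (x, y) pairs
        self.buf_pos = np.zeros(num_labels)   # positives currently stored
        self.stream_pos = np.zeros(num_labels)
        self.stream_total = 0

    def add(self, x, y):
        """Offer one sample with binary label vector y to the buffer."""
        y = np.asarray(y, dtype=float)
        self.stream_pos += y
        self.stream_total += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
            self.buf_pos += y
            return
        # Per-label positive ratios observed over the whole stream.
        target = self.stream_pos / self.stream_total
        best_i = None
        best_dev = np.abs(self.buf_pos / self.capacity - target).sum()
        # Try a few random slots; keep the swap (if any) that moves the
        # buffer's per-label ratios closest to the stream's.
        for i in random.sample(range(self.capacity), min(32, self.capacity)):
            new_pos = self.buf_pos - self.buffer[i][1] + y
            dev = np.abs(new_pos / self.capacity - target).sum()
            if dev < best_dev:
                best_i, best_dev = i, dev
        if best_i is not None:
            self.buf_pos += y - self.buffer[best_i][1]
            self.buffer[best_i] = (x, y)
```

A buffer like this keeps the per-label counts needed by an RLDAM-style loss approximately proportional to those of the full data stream, even after old tasks' samples are partially evicted.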
Problem

Research questions and friction points this paper is trying to address.

Multi-label Learning
Imbalanced Data
Macro-AUC Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-label Learning
Macro-AUC Optimization
RLDAM Loss Function
Yan Zhang
School of Software, Shandong University
Guoqiang Wu
Associate Professor, Shandong University
Machine Learning, Learning Theory, Reinforcement Learning
Bingzheng Wang
Institute of Information Engineering, Chinese Academy of Sciences
reinforcement learning, confidential computing, privacy inference
Teng Pang
School of Software, Shandong University
Haoliang Sun
School of Software, Shandong University
Yilong Yin
School of Software, Shandong University