AI Summary
Multi-label class-incremental learning (MLCIL) suffers from both label incompleteness and severe class imbalance, leading to incomplete retention of historical knowledge and model bias toward the head of long-tailed distributions. To address these challenges without exemplar replay, we propose Label-Augmented Analytic Adaptation (L3A), an exemplar-free framework comprising two core components: (1) a pseudo-label generation module that mitigates label scarcity during incremental phases; and (2) a Weighted Analytic Classifier (WAC), which jointly models label incompleteness and the long-tailed class distribution via sample-level dynamic weighting, yielding a closed-form solution under exemplar-free settings, the first such approach in MLCIL. L3A integrates multi-label loss modeling with closed-form neural network adaptation, significantly improving both knowledge retention and novel-class recognition. Extensive experiments demonstrate state-of-the-art performance on MS-COCO and PASCAL VOC. The code is publicly available.
Abstract
Class-incremental learning (CIL) enables models to learn new classes continually without forgetting previously acquired knowledge. Multi-label CIL (MLCIL) extends CIL to the real-world scenario where each sample may belong to multiple classes, introducing two challenges: label absence, which leads to incomplete historical information due to missing labels, and class imbalance, which results in model bias toward majority classes. To address these challenges, we propose Label-Augmented Analytic Adaptation (L3A), an exemplar-free approach that stores no past samples. L3A integrates two key modules. The pseudo-label (PL) module implements label augmentation by generating pseudo-labels for current-phase samples, addressing the label absence problem. The weighted analytic classifier (WAC) derives a closed-form solution for the neural network's classifier and introduces sample-specific weights to adaptively balance class contributions and mitigate class imbalance. Experiments on the MS-COCO and PASCAL VOC datasets demonstrate that L3A outperforms existing methods on MLCIL tasks. Our code is available at https://github.com/scut-zx/L3A.
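The two modules above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the pseudo-label step is shown here as simple thresholding of a frozen old model's sigmoid outputs, and the WAC step is shown as weighted ridge regression, whose closed form W = (XᵀΛX + γI)⁻¹XᵀΛY matches the general shape of an analytic classifier with a sample-weight matrix Λ. The threshold value, the weighting scheme, and the function names are illustrative assumptions.

```python
import numpy as np

def pseudo_labels(old_probs, partial_labels, threshold=0.7):
    """Label augmentation (illustrative): keep observed positives and add
    pseudo-positives where the frozen previous-phase model is confident.

    old_probs:      (n, c_old) sigmoid outputs of the frozen old model.
    partial_labels: (n, c_old) observed multi-hot labels (0/1), possibly
                    incomplete for old classes.
    """
    pseudo = (old_probs >= threshold).astype(float)
    return np.maximum(partial_labels, pseudo)

def weighted_analytic_classifier(X, Y, sample_weights, gamma=1.0):
    """Closed-form weighted least-squares classifier (illustrative WAC):
        W = (X^T Λ X + γ I)^{-1} X^T Λ Y,
    where Λ = diag(sample_weights) re-balances per-sample contributions
    and γ is a ridge regularizer.

    X: (n, d) features, Y: (n, c) multi-hot targets.
    """
    Lam = np.diag(sample_weights)
    d = X.shape[1]
    A = X.T @ Lam @ X + gamma * np.eye(d)
    return np.linalg.solve(A, X.T @ Lam @ Y)  # (d, c) weight matrix

# Tiny usage example with identity features: with unit weights and no
# regularization, the solution reproduces the targets exactly.
X = np.eye(2)
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
W = weighted_analytic_classifier(X, Y, np.ones(2), gamma=0.0)
```

Because the solution is closed-form, the classifier can be updated each incremental phase without gradient-based replay of past samples; only compact statistics of the form XᵀΛX and XᵀΛY need to be carried forward.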