🤖 AI Summary
To address the semantic confusion that arises from label co-occurrence, and the ambiguity between true negatives and false positives caused by partial labeling in multi-label class-incremental learning (MLCIL), this paper proposes the Dual-Decoupled Prompting (DDP) framework. DDP pioneers the integration of prompt learning into MLCIL: it decouples category semantics via class-specific positive/negative prompt pairs and mitigates misclassification through a progressive confidence-based decoupling strategy. Historical prompts are frozen as knowledge anchors, while a cross-layer prompting mechanism enables parameter-efficient fine-tuning without replay. Evaluated on MS-COCO and PASCAL VOC, DDP significantly outperforms existing methods, achieving 80.2% mAP and 70.5% F1 on the challenging MS-COCO B40-C10 replay-free benchmark, a first for replay-free methods. This work establishes the first efficient, scalable prompt-learning paradigm for MLCIL.
📝 Abstract
Prompt-based methods have shown strong effectiveness in single-label class-incremental learning, but their direct extension to multi-label class-incremental learning (MLCIL) performs poorly due to two intrinsic challenges: semantic confusion from co-occurring categories, and confusion between true negatives and false positives caused by partial labeling. We propose Dual-Decoupled Prompting (DDP), a replay-free and parameter-efficient framework that explicitly addresses both issues. DDP assigns class-specific positive/negative prompt pairs to disentangle semantics and introduces Progressive Confidence Decoupling (PCD), a curriculum-inspired decoupling strategy that suppresses false positives. Prompts from past tasks are frozen as knowledge anchors, and a cross-layer prompting mechanism improves parameter efficiency. On MS-COCO and PASCAL VOC, DDP consistently outperforms prior methods and is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 on the standard MS-COCO B40-C10 benchmark.
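To make the abstract's mechanism concrete, here is a minimal, hypothetical Python sketch of the dual-prompt idea, not the authors' implementation: every class owns a positive/negative prompt pair, prompts from completed tasks are frozen as knowledge anchors, and a confidence margin (a stand-in for the confidence-based decoupling threshold) gates positive predictions. The class names, the dot-product scoring, and the `margin` parameter are all illustrative assumptions.

```python
class DualPromptPool:
    """Toy pool of per-class positive/negative prompt pairs (hypothetical)."""

    def __init__(self, dim=4):
        self.dim = dim
        self.prompts = {}    # class_id -> (positive_prompt, negative_prompt)
        self.frozen = set()  # classes whose prompts act as knowledge anchors

    def add_task(self, class_ids):
        """Freeze all existing prompts, then allocate pairs for new classes."""
        self.frozen.update(self.prompts)
        for c in class_ids:
            self.prompts[c] = ([0.0] * self.dim, [0.0] * self.dim)

    def trainable_classes(self):
        # Only prompts of the current task remain trainable (no replay).
        return sorted(c for c in self.prompts if c not in self.frozen)


def score(feature, prompt):
    # Toy similarity: dot product between an image feature and a prompt.
    return sum(f * p for f, p in zip(feature, prompt))


def predict(pool, feature, margin=0.0):
    """Predict a class as present only when its positive prompt outscores
    its negative prompt by at least `margin`."""
    return sorted(
        c for c, (pos, neg) in pool.prompts.items()
        if score(feature, pos) - score(feature, neg) > margin
    )
```

Under this reading, scheduling `margin` over training (stricter or looser as confidence estimates mature) would play the role of the progressive, curriculum-style decoupling; the actual PCD schedule is specified in the paper, not here.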