DDP: Dual-Decoupled Prompting for Multi-Label Class-Incremental Learning

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address semantic confusion arising from label co-occurrence, and the ambiguity between true negatives and false positives caused by partial labeling in multi-label class-incremental learning (MLCIL), this paper proposes the Dual-Decoupled Prompting (DDP) framework. DDP pioneers the integration of prompt learning into MLCIL: it decouples category semantics via class-specific positive/negative prompt pairs and mitigates misclassification through a progressive confidence-based decoupling strategy. Historical prompts are frozen as knowledge anchors, while a cross-layer prompting mechanism enables parameter-efficient fine-tuning without replay. Evaluated on MS-COCO and PASCAL VOC, DDP significantly outperforms existing methods, achieving 80.2% mAP and 70.5% F1 on the challenging MS-COCO B40-C10 benchmark without replay, the first replay-free result to reach these levels. This establishes an efficient, scalable prompt-learning paradigm for MLCIL.

📝 Abstract
Prompt-based methods have shown strong effectiveness in single-label class-incremental learning, but their direct extension to multi-label class-incremental learning (MLCIL) performs poorly due to two intrinsic challenges: semantic confusion from co-occurring categories and true-negative-false-positive confusion caused by partial labeling. We propose Dual-Decoupled Prompting (DDP), a replay-free and parameter-efficient framework that explicitly addresses both issues. DDP assigns class-specific positive-negative prompts to disentangle semantics and introduces Progressive Confidence Decoupling (PCD), a curriculum-inspired decoupling strategy that suppresses false positives. Past prompts are frozen as knowledge anchors, and interlayer prompting enhances efficiency. On MS-COCO and PASCAL VOC, DDP consistently outperforms prior methods and is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 under the standard MS-COCO B40-C10 benchmark.
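The abstract's core mechanism, class-specific positive/negative prompt pairs with past prompts kept frozen as knowledge anchors, could be organized roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: `prompt_pool`, `add_task_classes`, the embedding size, and the margin-style scoring are all hypothetical names and choices.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8  # toy embedding size; real prompt tokens use e.g. 768 dims

# Hypothetical prompt pool: one (positive, negative) prompt pair per class.
prompt_pool = {}  # class_id -> (pos_prompt, neg_prompt)

def add_task_classes(class_ids):
    """Register classes of a new task by allocating fresh prompt pairs.
    Pairs already in the pool are never overwritten, mimicking the idea
    of freezing historical prompts as knowledge anchors."""
    for c in class_ids:
        if c not in prompt_pool:
            prompt_pool[c] = (rng.normal(size=EMBED_DIM),
                              rng.normal(size=EMBED_DIM))

def class_scores(image_feat):
    """Score each known class independently: similarity to its positive
    prompt minus similarity to its negative prompt (a margin-style
    multi-label score, one per class)."""
    return {c: float(image_feat @ pos - image_feat @ neg)
            for c, (pos, neg) in prompt_pool.items()}

add_task_classes([0, 1])   # task 1 introduces classes 0 and 1
add_task_classes([2])      # task 2 adds class 2; old pairs stay frozen
feat = rng.normal(size=EMBED_DIM)
print(sorted(class_scores(feat)))  # -> [0, 1, 2]
```

The per-class pair is what "decouples" semantics here: each class gets its own positive and negative direction, so co-occurring categories do not share a single prompt.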
Problem

Research questions and friction points this paper is trying to address.

Addresses semantic confusion from co-occurring categories
Mitigates true-negative-false-positive confusion in partial labeling
Enables replay-free multi-label class-incremental learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assigns class-specific positive-negative prompts for semantic disentanglement
Introduces Progressive Confidence Decoupling to suppress false positives
Freezes past prompts as knowledge anchors for efficiency
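The false-positive suppression idea behind Progressive Confidence Decoupling might be sketched as a confidence threshold that follows a curriculum-style schedule, with only sufficiently confident predictions for unlabeled classes treated as positives. The linear schedule, its direction (strict early, lenient later), and all names below are assumptions for illustration, not the paper's actual rule.

```python
def pcd_threshold(step, total_steps, t_start=0.9, t_end=0.5):
    """Curriculum-style confidence threshold: begin strict, so only
    very confident pseudo-positives pass, then relax linearly as the
    prompts mature. (Schedule shape and direction are assumptions.)"""
    frac = min(step / total_steps, 1.0)
    return t_start + frac * (t_end - t_start)

def filter_pseudo_positives(probs, step, total_steps):
    """Keep only unlabeled classes whose predicted probability exceeds
    the current threshold; the rest are treated as negatives, which is
    how low-confidence false positives get suppressed."""
    thr = pcd_threshold(step, total_steps)
    return {c: p for c, p in probs.items() if p >= thr}
```

For example, early in training (`step=0`) a prediction of 0.7 for an unlabeled class would be rejected as a likely false positive, while late in training the relaxed threshold would accept it.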
👥 Authors
Kaile Du
Southeast University
Continual learning, Class-incremental learning, Multi-label learning
Zihan Ye
University of the Chinese Academy of Sciences
Junzhou Xie
School of Automation, Southeast University
Fan Lyu
NLPR, CASIA
Computer Vision, Machine Learning, Artificial Intelligence
Yixi Shen
Suzhou University of Science and Technology
Yuyang Li
Institute for AI, Peking University
Robotic Manipulation, Tactile Sensing, Human-Object Interaction
Miaoxuan Zhu
School of Automation, Southeast University
Fuyuan Hu
Professor, Suzhou University of Science and Technology
Machine Learning, Computer Vision
Ling Shao
University of the Chinese Academy of Sciences
Guangcan Liu
School of Automation, Southeast University