CKAA: Cross-subspace Knowledge Alignment and Aggregation for Robust Continual Learning

📅 2025-07-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In parameter-efficient fine-tuning (PEFT) for continual learning, independently trained factorized modules suffer from feature subspace misalignment and make ambiguous decisions under misleading task IDs. To address this, the paper proposes a dual-level knowledge alignment and task-confidence-guided adapter mixing mechanism, achieving distribution calibration and robust classification via cross-subspace feature alignment, joint optimization of a global classifier, and confidence-aware knowledge aggregation. Key contributions: (1) explicit modeling and correction of feature subspace shifts among modular adapters; and (2) dynamic weighting of adapter outputs based on task confidence, which significantly improves tolerance to erroneous task ID assignments. Experiments show the approach consistently outperforms state-of-the-art PEFT methods across multiple continual learning benchmarks, with particularly notable gains in robustness when task identifiers are corrupted or ambiguous.

๐Ÿ“ Abstract
Continual Learning (CL) empowers AI models to continuously learn from sequential task streams. Recently, parameter-efficient fine-tuning (PEFT)-based CL methods have garnered increasing attention due to their superior performance. They typically allocate a unique sub-module for learning each task, with a task recognizer to select the appropriate sub-modules for testing images. However, due to the feature subspace misalignment from independently trained sub-modules, these methods tend to produce ambiguous decisions under misleading task-ids. To address this, we propose Cross-subspace Knowledge Alignment and Aggregation (CKAA), a novel framework that enhances model robustness against misleading task-ids through two key innovations: (1) Dual-level Knowledge Alignment (DKA): By aligning intra-class feature distributions across different subspaces and learning a robust global classifier through a feature simulation process, DKA enables the model to distinguish features from both correct and incorrect subspaces during training. (2) Task-Confidence-guided Mixture of Adapters (TC-MoA): A robust inference scheme that adaptively aggregates task-specific knowledge from relevant sub-modules based on task-confidence scores, avoiding overconfidence in misleading task-id predictions. Extensive experiments demonstrate that CKAA outperforms existing PEFT-based CL methods.
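To make the TC-MoA idea concrete, here is a minimal sketch of confidence-weighted adapter mixing: per-task confidence scores are softmax-normalized and used to blend the outputs of task-specific adapters, rather than committing to a single (possibly wrong) task ID. The linear adapter form, function names, and scoring inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def tc_moa_mix(feature, adapters, confidences):
    """Aggregate task-specific adapter outputs by task confidence.

    feature     : input feature vector, shape (d,)
    adapters    : list of (W, b) pairs -- toy linear adapters (hypothetical)
    confidences : per-task confidence scores, one per adapter

    Returns the confidence-weighted mixture of adapter outputs and the
    normalized weights. A hard task-ID selector would instead pick the
    single argmax adapter, which is brittle when the task ID is wrong.
    """
    weights = softmax(np.asarray(confidences, dtype=float))
    mixed = np.zeros_like(feature, dtype=float)
    for w, (W, b) in zip(weights, adapters):
        mixed += w * (W @ feature + b)
    return mixed, weights
```

With roughly equal confidences the mixture falls back to an even blend of all adapters; as one task's confidence dominates, the output approaches that single adapter, so the scheme degrades gracefully under ambiguous task-ID predictions.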
Problem

Research questions and friction points this paper is trying to address.

Independently trained sub-modules yield misaligned feature subspaces, producing ambiguous decisions
Models are brittle when task identifiers are misleading or incorrectly predicted
Inference lacks a mechanism to aggregate task-specific knowledge across sub-modules adaptively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns intra-class feature distributions across subspaces
Learns a robust global classifier via feature simulation
Adaptively aggregates knowledge using task-confidence scores
Lingfeng He
School of Telecommunications Engineering, Xidian University, Xi'an, China
De Cheng
Associate Professor, Xidian University
Computer Vision · Deep Learning · Machine Learning · Data Compression
Zhiheng Ma
SUAT-Faculty of Computational Microelectronics, Shenzhen University of Advanced Technology, Shenzhen, China
Huaijie Wang
School of Electronic Engineering, Xidian University, Xi'an, China
Dingwen Zhang
Brain and Artificial Intelligence Laboratory, Northwestern Polytechnical University, Xi'an, China
Nannan Wang
Professor, Xidian University
Computer Vision · Machine Learning · Pattern Recognition
Xinbo Gao
School of Electronic Engineering, Xidian University, Xiโ€™an, China