Critical Patch-Aware Sparse Prompting with Decoupled Training for Continual Learning on the Edge

📅 2026-04-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses continual learning on edge devices, where training-phase memory and computational overhead severely constrain performance, yet existing approaches often overlook training efficiency. To this end, we propose CPS-Prompt, a novel framework that integrates critical patch-aware sparse prompting with decoupled prompt–classifier training (DPCT). This design substantially reduces both the number of tokens processed and the cost of backpropagation during training, significantly lowering memory consumption, training time, and energy usage while maintaining high accuracy. Experiments on three public benchmarks and real edge hardware demonstrate that, compared with the CODA-Prompt baseline, CPS-Prompt yields approximately 1.6× improvements in peak memory, training time, and energy efficiency, with average accuracy within 2% of the current state-of-the-art C-Prompt and on par with CODA-Prompt.
๐Ÿ“ Abstract
Continual learning (CL) on edge devices requires not only high accuracy but also training-time efficiency to support on-device adaptation under strict memory and computational constraints. While prompt-based continual learning (PCL) is parameter-efficient and achieves competitive accuracy, prior work has focused mainly on accuracy or inference-time performance, often overlooking the memory and computational costs of on-device training. In this paper, we propose CPS-Prompt, a critical patch-aware sparse prompting framework that explicitly targets training-time memory usage and computational cost by integrating critical patch sampling (CPS) for task-aware token reduction and decoupled prompt and classifier training (DPCT) to reduce backpropagation overhead. Experiments on three public benchmarks and real edge hardware show that CPS-Prompt improves peak memory, training time, and energy efficiency by about 1.6× over the balanced CODA-Prompt baseline, while maintaining accuracy within 2% of the state-of-the-art C-Prompt on average and remaining competitive with CODA-Prompt in accuracy. The code is available at https://github.com/laymond1/cps-prompt.
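The abstract describes critical patch sampling as task-aware token reduction but gives no implementation details. A minimal sketch of the general idea, with the assumption (not taken from the paper) that a patch's saliency is approximated by its embedding norm and that the top-scoring fraction of ViT patch tokens is kept:

```python
import numpy as np

def critical_patch_sampling(patch_tokens, keep_ratio=0.5):
    """Keep only the most salient patch tokens.

    Illustrative sketch: scores each patch by its L2 feature norm
    (a hypothetical stand-in for the paper's task-aware criticality
    score) and keeps the top `keep_ratio` fraction, shrinking the
    token count the transformer must process during training.
    """
    scores = np.linalg.norm(patch_tokens, axis=-1)   # one score per patch
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]              # indices of top-k patches
    return patch_tokens[np.sort(keep)]               # preserve spatial order

# Example: 196 ViT patches with 768-dim embeddings -> 98 after 50% sampling
tokens = np.random.randn(196, 768)
sparse = critical_patch_sampling(tokens, keep_ratio=0.5)
print(sparse.shape)  # (98, 768)
```

Because attention cost grows quadratically in the token count, halving the tokens reduces attention FLOPs by roughly 4×, which is where the training-time savings would come from.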
Problem

Research questions and friction points this paper is trying to address.

continual learning
edge devices
training efficiency
memory constraints
computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

critical patch sampling
sparse prompting
decoupled training
continual learning
edge computing
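The decoupled training listed above (DPCT in the abstract) is described only as a way to reduce backpropagation overhead. One common realization of this idea, sketched here as an assumption rather than the paper's actual procedure, is to fit the classifier head on features cached from a frozen backbone, so classifier updates never backpropagate through the backbone at all:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen backbone: a fixed random projection standing in
# for a pre-trained ViT. No gradient ever flows through it.
W_backbone = rng.normal(size=(32, 16))

def backbone(x):
    return np.tanh(x @ W_backbone)

# Toy task: labels come from a linear rule in input space.
X = rng.normal(size=(200, 32))
y = (X @ rng.normal(size=32) > 0).astype(int)

# Decoupled step: features are computed ONCE with the backbone frozen,
# then the classifier head is trained on the cached features.
feats = backbone(X)                       # (200, 16), cached

def xent(W):
    """Mean softmax cross-entropy of the linear head W on the cache."""
    logits = feats @ W
    m = logits.max(axis=1, keepdims=True)
    log_z = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    return float(np.mean(log_z - logits[np.arange(len(y)), y]))

W_cls = np.zeros((16, 2))
loss_before = xent(W_cls)                 # = ln(2) at initialization
for _ in range(100):                      # plain softmax-regression GD
    logits = feats @ W_cls
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1          # softmax gradient residual
    W_cls -= 0.1 * feats.T @ p / len(y)
loss_after = xent(W_cls)                  # strictly below loss_before
```

The classifier loop touches only the cached `feats` matrix, so its memory and compute footprint is independent of backbone depth; that is the kind of backpropagation saving the summary attributes to DPCT.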