LODAP: On-Device Incremental Learning Via Lightweight Operations and Data Pruning

πŸ“… 2025-04-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of running computation-intensive incremental learning on resource-constrained edge devices that cannot rely on a remote server, this paper proposes LODAP, an efficient on-device incremental learning framework. The method introduces a lightweight module, the Efficient Incremental Module (EIM), which augments normal convolutions with lightweight adapter operations so the model can learn novel classes without full retraining. It further integrates a training-data pruning strategy that substantially reduces memory footprint and training overhead. Evaluated on CIFAR-100 and Tiny-ImageNet, the approach achieves up to a 4.32% accuracy gain over baselines while cutting model parameters and FLOPs by roughly 50%. End-to-end deployment is validated on real edge hardware (e.g., NVIDIA Jetson Nano). To the best of the authors' knowledge, this is the first work to combine lightweight adapters and data pruning for edge-side incremental learning, balancing classification accuracy, inference efficiency, and practical deployability.

πŸ“ Abstract
Incremental learning, which enables a model to learn new classes over time after deployment, is becoming increasingly crucial, particularly for industrial edge systems, where communicating with a remote server to conduct computation-intensive learning is difficult. Edge devices are thus expected to learn more classes on-device after deployment. In this paper, we propose LODAP, a new on-device incremental learning framework for edge systems. The key component of LODAP is a new module, the Efficient Incremental Module (EIM), which is composed of normal convolutions and lightweight operations. During incremental learning, EIM exploits lightweight operations, called adapters, to effectively and efficiently learn features for new classes, improving the accuracy of incremental learning while reducing model complexity and training overhead. The efficiency of LODAP is further enhanced by a data pruning strategy that significantly reduces the training data, thereby lowering the training overhead. We conducted extensive experiments on the CIFAR-100 and Tiny-ImageNet datasets. Experimental results show that LODAP improves accuracy by up to 4.32% over existing methods while reducing model complexity by around 50%. In addition, evaluations on real edge systems demonstrate its applicability to on-device machine learning. The code is available at https://github.com/duanbiqing/LODAP.
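The adapter idea the abstract describes can be sketched in plain Python. This is a hypothetical illustration, not the paper's implementation: `heavy_transform` stands in for a pretrained convolution whose weights are frozen, and the small per-channel `adapter` branch stands in for an EIM adapter, the only part whose weights would be trained when new classes arrive.

```python
# Hypothetical sketch of the EIM adapter structure: a frozen "heavy" path
# plus a lightweight trainable branch. All names and shapes here are
# illustrative assumptions, not the paper's actual architecture.

def heavy_transform(x, frozen_w):
    """Stands in for a pretrained convolution; these weights never change."""
    return [w * v for w, v in zip(frozen_w, x)]

def adapter(x, a_w):
    """Lightweight per-channel scaling, a stand-in for a cheap adapter op."""
    return [w * v for w, v in zip(a_w, x)]

def eim_forward(x, frozen_w, a_w):
    """EIM-style output: frozen path plus trainable adapter path."""
    base = heavy_transform(x, frozen_w)
    extra = adapter(x, a_w)
    return [b + e for b, e in zip(base, extra)]

x = [1.0, 2.0, 3.0]
frozen_w = [1.0, 1.0, 1.0]   # frozen during incremental learning
a_w = [0.5, -0.5, 0.0]       # only these few parameters would be updated
print(eim_forward(x, frozen_w, a_w))  # [1.5, 1.0, 3.0]
```

Because only the adapter branch is trainable, the number of updated parameters stays small, which is the source of the reduced training overhead the abstract claims.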
Problem

Research questions and friction points this paper is trying to address.

Enables on-device incremental learning for edge systems
Reduces model complexity and training overhead
Improves accuracy while learning new classes incrementally
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses lightweight adapters for efficient learning
Incorporates data pruning to reduce overhead
Combines normal and lightweight convolutions
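The data-pruning contribution above can be illustrated with a minimal sketch, assuming a simple score-and-keep rule in which each sample has a difficulty score (e.g., its training loss) and only the highest-scoring fraction is retained; the actual selection criterion in LODAP may differ.

```python
# Hypothetical sketch of training-data pruning: rank samples by an
# importance score and keep only the top fraction, shrinking each
# training pass. The scoring rule is an assumption for illustration.

def prune_dataset(samples, scores, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of samples, ranked by score."""
    ranked = sorted(zip(scores, range(len(samples))), reverse=True)
    n_keep = max(1, int(len(samples) * keep_ratio))
    keep_idx = sorted(i for _, i in ranked[:n_keep])  # preserve order
    return [samples[i] for i in keep_idx]

data = ["img_a", "img_b", "img_c", "img_d"]
losses = [0.9, 0.1, 0.7, 0.3]   # higher loss = harder, more informative
print(prune_dataset(data, losses, keep_ratio=0.5))  # ['img_a', 'img_c']
```

Halving the training set this way roughly halves the per-epoch compute, which is how pruning complements the lightweight adapters in cutting on-device training cost.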
πŸ”Ž Similar Papers
No similar papers found.
Biqing Duan
School of Software, Yunnan University, Kunming, China
Qing Wang
School of Software, Yunnan University, Kunming, China
Di Liu
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Wei Zhou
School of Software, Yunnan University, Kunming, China
Zhenli He
Yunnan University
Shengfa Miao
School of Software, Yunnan University, Kunming, China