AI Summary
To address the challenge of deploying computationally intensive remote incremental learning on resource-constrained edge devices, this paper proposes an efficient on-device online incremental learning framework. Our method introduces a lightweight convolutional adapter module, the Efficient Incremental Module (EIM), to dynamically expand the model for recognizing novel classes without full retraining. It further integrates incremental feature recalibration with training-data pruning to substantially reduce memory footprint and computational overhead. Evaluated on CIFAR-100 and Tiny-ImageNet, our approach achieves up to a 4.32% absolute accuracy gain over baselines while reducing model parameters and FLOPs by approximately 50%. End-to-end deployment is validated on real edge hardware (e.g., NVIDIA Jetson Nano). To the best of our knowledge, this is the first work to synergistically combine lightweight adapters and data pruning for edge-side incremental learning, effectively balancing classification accuracy, inference efficiency, and practical deployability.
Abstract
Incremental learning, which learns new classes over time after a model's deployment, is becoming increasingly crucial, particularly for industrial edge systems, where it is difficult to communicate with a remote server to conduct computation-intensive learning and where models are expected to learn more and more classes after deployment. In this paper, we propose LODAP, a new on-device incremental learning framework for edge systems. The key component of LODAP is a new module, the Efficient Incremental Module (EIM). EIM is composed of normal convolutions and lightweight operations. During incremental learning, EIM exploits lightweight operations, called adapters, to effectively and efficiently learn features for new classes, so that it improves the accuracy of incremental learning while reducing model complexity as well as training overhead. The efficiency of LODAP is further enhanced by a data pruning strategy that significantly reduces the training data, thereby lowering the training overhead. We conducted extensive experiments on the CIFAR-100 and Tiny-ImageNet datasets. Experimental results show that LODAP improves accuracy by up to 4.32% over existing methods while reducing model complexity by around 50%. In addition, evaluations on real edge systems demonstrate its applicability to on-device machine learning. The code is available at https://github.com/duanbiqing/LODAP.
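To make the adapter idea concrete, here is a minimal sketch of an adapter-augmented convolution block in the spirit of the EIM described above: a "normal" convolution trained on the initial classes and then frozen, plus a lightweight depthwise-convolution adapter that is the only part trained when new classes arrive. The class name, layer choices, and residual combination are our assumptions for illustration, not the authors' exact design (see the repository for the real implementation).

```python
# Hypothetical sketch of an adapter-augmented conv block (assumed design,
# not the paper's exact EIM): frozen base conv + trainable lightweight adapter.
import torch
import torch.nn as nn


class AdapterConvBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # "Normal" convolution: trained on the initial classes, then frozen.
        self.base = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Lightweight adapter: depthwise conv with far fewer parameters
        # (channels * 3 * 3 weights vs. channels^2 * 3 * 3 for the base).
        self.adapter = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels)

    def freeze_base(self):
        # During incremental learning, only the adapter receives gradients.
        for p in self.base.parameters():
            p.requires_grad = False

    def forward(self, x):
        # Adapter features are added residually to the frozen base features.
        return self.base(x) + self.adapter(x)


block = AdapterConvBlock(channels=16)
block.freeze_base()
out = block(torch.randn(2, 16, 8, 8))
trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
```

With 16 channels, the frozen base conv holds 2,320 parameters while the trainable adapter holds only 160, which is the kind of parameter and FLOP saving the abstract attributes to the lightweight operations.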
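The data pruning strategy can likewise be sketched in a few lines. The abstract does not state the pruning criterion, so the score used here (keeping only the hardest samples by per-sample loss) is purely our assumption; the point is only that training on a fixed fraction of the data directly lowers the per-epoch training overhead.

```python
# Illustrative score-based training-data pruning (assumed criterion, not
# the paper's): keep only the highest-loss fraction of the training set.
import numpy as np


def prune_by_loss(losses: np.ndarray, keep_frac: float) -> np.ndarray:
    """Return indices of samples to keep, highest loss first."""
    k = max(1, int(len(losses) * keep_frac))
    return np.argsort(losses)[::-1][:k]


losses = np.array([0.1, 0.9, 0.4, 0.7, 0.05])
kept = prune_by_loss(losses, keep_frac=0.4)  # keeps the 2 hardest samples
```

Subsequent incremental-training epochs would then iterate only over `kept`, shrinking the training set (and thus the training cost) by the chosen fraction.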