TinySubNets: An efficient and low capacity continual learning strategy

📅 2024-12-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing continual learning approaches suffer from rapid capacity saturation and underutilization of model sparsity, which limits the number of learnable tasks and parameter efficiency. To address these issues, the paper proposes TinySubNets (TSN), a lightweight architectural continual learning strategy that combines three components: (i) pruning with different sparsity levels, (ii) adaptive quantization that allows a single weight to be separated into parts assigned to different tasks, and (iii) weight sharing between tasks. Together, these mechanisms improve parameter reuse and knowledge transfer across tasks while keeping model capacity low. Evaluated on common continual learning benchmarks, TSN achieves higher accuracy than state-of-the-art strategies, significantly better model capacity exploitation, and reduced computational resource consumption.

📝 Abstract
Continual Learning (CL) is a highly relevant setting gaining traction in recent machine learning research. Among CL works, architectural and hybrid strategies are particularly effective due to their potential to adapt the model architecture as new tasks are presented. However, many existing solutions do not efficiently exploit model sparsity, and are prone to capacity saturation due to their inefficient use of available weights, which limits the number of learnable tasks. In this paper, we propose TinySubNets (TSN), a novel architectural CL strategy that addresses the issues through the unique combination of pruning with different sparsity levels, adaptive quantization, and weight sharing. Pruning identifies a subset of weights that preserve model performance, making less relevant weights available for future tasks. Adaptive quantization allows a single weight to be separated into multiple parts which can be assigned to different tasks. Weight sharing between tasks boosts the exploitation of capacity and task similarity, allowing for the identification of a better trade-off between model accuracy and capacity. These features allow TSN to efficiently leverage the available capacity, enhance knowledge transfer, and reduce computational resource consumption. Experimental results involving common benchmark CL datasets and scenarios show that our proposed strategy achieves better results in terms of accuracy than existing state-of-the-art CL strategies. Moreover, our strategy is shown to provide a significantly improved model capacity exploitation. Code released at: https://github.com/lifelonglab/tinysubnets.
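The pruning step described in the abstract — identifying a subset of weights that preserve performance so that less relevant weights become available for future tasks — can be sketched with simple magnitude-based pruning. This is an illustrative sketch only: the function name, the magnitude criterion, and the per-task mask bookkeeping are assumptions, not the paper's exact implementation.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    magnitude; return the pruned weights and a boolean mask of kept
    positions (the task's subnetwork)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Positions freed by pruning task 1's subnetwork stay available
# for future tasks, which is how capacity saturation is delayed.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
pruned, kept = magnitude_prune(W, sparsity=0.5)
free_for_next_task = ~kept  # weight slots a later task may claim
```

With 50% sparsity on a 4x4 layer, half of the 16 weight slots remain assignable to the next task's subnetwork.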
Problem

Research questions and friction points this paper is trying to address.

Addresses capacity saturation in continual learning.
Exploits model sparsity more efficiently.
Improves the trade-off between accuracy and resource consumption.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pruning with varied sparsity levels
Adaptive quantization splits single weights across tasks
Weight sharing boosts capacity exploitation
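The second innovation — separating a single weight into parts assigned to different tasks via quantization — can be sketched with uniform fixed-point quantization and bit packing. This is purely illustrative: the 8-bit split, the [-1, 1] range, and the packing scheme are assumptions, not the paper's adaptive bit-width method.

```python
import numpy as np

def pack_two_tasks(w_a, w_b, bits=8):
    """Quantize each task's weight to `bits` bits and store both in a
    single 16-bit slot: upper bits hold task A, lower bits task B."""
    levels = (1 << bits) - 1
    # map [-1, 1] onto {0, ..., levels} for each task
    qa = np.round((np.clip(w_a, -1, 1) + 1) / 2 * levels).astype(np.uint16)
    qb = np.round((np.clip(w_b, -1, 1) + 1) / 2 * levels).astype(np.uint16)
    return (qa << bits) | qb

def unpack(packed, task, bits=8):
    """Recover the approximate float weight for one task from the slot."""
    levels = (1 << bits) - 1
    q = (packed >> bits) if task == 0 else (packed & levels)
    return (q.astype(np.float64) / levels) * 2 - 1

p = pack_two_tasks(np.array([0.5]), np.array([-0.25]))
w_task0 = unpack(p, 0)  # close to 0.5, up to quantization error
w_task1 = unpack(p, 1)  # close to -0.25
```

Each task reads only its own bit range at inference time, so one stored value serves two subnetworks; reconstruction error is bounded by the quantization step (about 2/255 here).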
Marcin Pietroń
AGH University of Krakow, Krakow, Poland
Kamil Faber
AGH University of Krakow, Krakow, Poland
Dominik Żurek
AGH University of Krakow, Krakow, Poland
Roberto Corizzo
American University
Data Mining, Big Data, Continual Learning, Machine Learning