Minion Gated Recurrent Unit for Continual Learning

📅 2025-03-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high computational overhead, architectural redundancy, and training complexity of recurrent neural network (RNN) continual learning on edge devices, this paper proposes Minion GRU (MiRU), a lightweight GRU variant. MiRU eliminates conventional gating mechanisms and instead employs learnable scaling coefficients to dynamically update hidden states, enabling gate-free, parameter-efficient, and computationally lightweight continual learning. It achieves stable multi-task continual learning using only replay-based experience rehearsal, without requiring auxiliary regularization or interference-mitigation mechanisms. Experimental results demonstrate that MiRU accelerates training by 2.90× and reduces model parameters by 2.88× compared to the standard GRU, while matching its accuracy on image-sequence classification and NLP benchmark tasks. These improvements significantly enhance the feasibility and practicality of continual learning on resource-constrained edge platforms.
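The summary does not give MiRU's exact update equations, but the core idea (replacing the GRU's update and reset gates with learnable scaling coefficients) can be sketched as follows. The function names, the scalar coefficients `alpha` and `beta`, and the specific blend `alpha * h + beta * h_tilde` are illustrative assumptions, not the paper's definitive formulation:

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # Standard GRU: two full gate computations per step (3 weight pairs total).
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))      # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))      # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

def miru_cell(x, h, W, U, alpha, beta):
    # Hypothetical MiRU-style update: no gates, just learnable scaling
    # coefficients blending the previous state with a single candidate.
    h_tilde = np.tanh(W @ x + U @ h)
    return alpha * h + beta * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
x, h = rng.standard_normal(d_in), np.zeros(d_h)
W = rng.standard_normal((d_h, d_in))
U = rng.standard_normal((d_h, d_h))
h_next = miru_cell(x, h, W, U, alpha=0.9, beta=0.1)
```

Note the parameter arithmetic: a standard GRU carries three input/recurrent weight pairs (for z, r, and the candidate), while the gate-free cell above carries one pair plus two scalars, which is roughly a 3× reduction and broadly consistent with the reported 2.88× figure.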

๐Ÿ“ Abstract
The increasing demand for continual learning in sequential data processing has led to progressively complex training methodologies and larger recurrent network architectures. Consequently, this has widened the gap between continual learning with recurrent neural networks (RNNs) and their ability to operate on devices with limited memory and compute. To address this challenge, we investigate the effectiveness of simplifying RNN architectures, particularly the gated recurrent unit (GRU), and the impact of this simplification on both single-task and multitask sequential learning. We propose a new variant of GRU, namely the minion recurrent unit (MiRU). MiRU replaces conventional gating mechanisms with scaling coefficients that regulate dynamic updates of hidden states and historical context, reducing computational costs and memory requirements. Despite its simplified architecture, MiRU maintains performance comparable to the standard GRU while achieving 2.90× faster training and reducing parameter usage by 2.88×, as demonstrated through evaluations on sequential image classification and natural language processing benchmarks. The impact of model simplification on learning capacity is also investigated by performing continual learning tasks with a rehearsal-based strategy and global inhibition. We find that MiRU demonstrates stable performance in multitask learning even when using only rehearsal, unlike the standard GRU and its variants. These features position MiRU as a promising candidate for edge-device applications.
Problem

Research questions and friction points this paper is trying to address.

Computational and memory costs of RNN-based continual learning exceed the budgets of edge devices.
How far can GRU architectures be simplified without sacrificing accuracy?
Can a simplified recurrent unit sustain multitask continual learning with faster training and fewer parameters?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gate-free GRU variant (MiRU) driven by learnable scaling coefficients
2.90× faster training and 2.88× fewer parameters than the standard GRU
Stable multitask continual learning using rehearsal alone, without auxiliary regularization
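The rehearsal strategy highlighted above can be illustrated with a minimal replay buffer. The paper's exact replay scheme is not specified in this summary; the sketch below assumes a standard reservoir-sampling buffer that keeps a uniform sample over all examples seen across tasks, which old and new tasks are then drawn from during training:

```python
import random

class ReplayBuffer:
    """Illustrative reservoir-sampling rehearsal buffer (an assumption,
    not the paper's exact mechanism)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity/seen,
            # keeping a uniform sample over everything observed so far.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=100)
for task_id in range(3):            # three sequential tasks
    for i in range(500):
        buf.add((task_id, i))
batch = buf.sample(32)              # rehearsal batch mixing all tasks
```

During continual training, each gradient step would mix such a rehearsal batch with current-task data, which is the only interference-mitigation mechanism the summary says MiRU needs.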