IMLP: An Energy-Efficient Continual Learning Method for Tabular Data Streams

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Replay-based continual learning methods for tabular data streams incur memory and energy overhead that grows without bound over time, making them ill-suited to resource-constrained edge and mobile devices. This work proposes an energy-efficient, memory-constant incremental learning framework: (1) a context-aware lightweight multilayer perceptron with a fixed-size sliding feature buffer and windowed scaled dot-product attention, eliminating replay storage entirely; (2) a shared feed-forward network and lightweight feature concatenation that further reduce the model footprint. The authors also introduce NetScore-T, a joint energy-efficiency–accuracy metric. Experiments show the method achieves up to 27.6× and 85.5× higher energy efficiency than TabNet and TabPFN, respectively, while maintaining competitive average accuracy across benchmarks.

📝 Abstract
Tabular data streams are rapidly emerging as a dominant modality for real-time decision-making in healthcare, finance, and the Internet of Things (IoT). These applications commonly run on edge and mobile devices, where energy budgets, memory, and compute are strictly limited. Continual learning (CL) addresses such dynamics by training models sequentially on task streams while preserving prior knowledge and consolidating new knowledge. While recent CL work has advanced in mitigating catastrophic forgetting and improving knowledge transfer, the practical requirements of energy and memory efficiency for tabular data streams remain underexplored. In particular, existing CL solutions mostly depend on replay mechanisms whose buffers grow over time and exacerbate resource costs. We propose a context-aware incremental Multi-Layer Perceptron (IMLP), a compact continual learner for tabular data streams. IMLP incorporates a windowed scaled dot-product attention over a sliding latent feature buffer, enabling constant-size memory and avoiding storing raw data. The attended context is concatenated with current features and processed by shared feed-forward layers, yielding lightweight per-segment updates. To assess practical deployability, we introduce NetScore-T, a tunable metric coupling balanced accuracy with energy for Pareto-aware comparison across models and datasets. IMLP achieves up to $27.6\times$ higher energy efficiency than TabNet and $85.5\times$ higher than TabPFN, while maintaining competitive average accuracy. Overall, IMLP provides an easy-to-deploy, energy-efficient alternative to full retraining for tabular data streams.
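The core mechanism described in the abstract, a fixed-size sliding buffer of latent features attended via scaled dot-product attention, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation; the class name, buffer layout, and single-query attention are assumptions made for the sketch.

```python
import numpy as np

class SlidingAttentionContext:
    """Sketch (assumed details): a fixed-size ring buffer of past latent
    features plus scaled dot-product attention over that window. Only latent
    vectors are stored, never raw samples, so memory stays constant."""

    def __init__(self, window: int, dim: int):
        self.window = window
        self.dim = dim
        self.buffer = np.zeros((0, dim))  # (<= window, dim) latent features

    def push(self, z: np.ndarray) -> None:
        # Append the newest latent feature and keep only the last `window`
        # entries, so memory does not grow with stream length.
        self.buffer = np.vstack([self.buffer, z[None, :]])[-self.window:]

    def attend(self, q: np.ndarray) -> np.ndarray:
        # Scaled dot-product attention of the current latent query over the
        # buffered window; returns the attended context vector.
        if len(self.buffer) == 0:
            return np.zeros(self.dim)
        scores = self.buffer @ q / np.sqrt(self.dim)
        w = np.exp(scores - scores.max())  # numerically stable softmax
        w /= w.sum()
        return w @ self.buffer
```

Per the abstract, the returned context would then be concatenated with the current feature vector and fed through shared feed-forward layers; those layers are omitted here since their sizes are not specified.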
Problem

Research questions and friction points this paper is trying to address.

Addressing energy-efficient continual learning for tabular data streams
Mitigating memory growth in replay mechanisms for resource-limited devices
Enabling lightweight model updates while preventing catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses context-aware incremental MLP for tabular data
Implements sliding latent feature buffer with attention
Introduces NetScore-T metric for energy-aware model evaluation
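The paper describes NetScore-T only as a tunable metric coupling balanced accuracy with energy; its exact formula is not given here. The sketch below shows one plausible NetScore-style form, rewarding accuracy and penalizing energy with tunable exponents. The function name, exponents, and logarithmic scaling are hypothetical.

```python
import math

def netscore_t(balanced_acc: float, energy_joules: float,
               alpha: float = 2.0, beta: float = 1.0) -> float:
    """Hypothetical sketch of a tunable accuracy-vs-energy score in the
    spirit of NetScore-style metrics: higher balanced accuracy raises the
    score, higher energy lowers it, and (alpha, beta) set the trade-off.
    Not the paper's actual NetScore-T definition."""
    return 20.0 * math.log10(balanced_acc ** alpha / energy_joules ** beta)
```

Tuning `alpha` up favors accuracy; tuning `beta` up penalizes energy-hungry models more, enabling Pareto-style comparison across models and datasets.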