Adaptive Machine Learning for Resource-Constrained Environments

📅 2025-03-24
🏛️ Delta
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of resource-constrained IoT edge gateways, which face dynamic CPU load and continuous data streams, this paper proposes a lightweight online/continual learning framework for time-series forecasting of CPU utilization. The framework combines incremental ensemble modeling with adaptive feature representation, enabling real-time availability prediction with low computational overhead. Evaluated on a real-world IoT gateway dataset against baselines including the time-series foundation model Lag-Llama (in both zero-shot and fine-tuned setups), it achieves state-of-the-art (SOTA) prediction accuracy while reducing inference latency by over 60% and memory footprint by 5×. Its core contribution is an end-to-end online learning architecture that satisfies tight resource constraints while remaining robustly adaptable to evolving data.
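The online/continual learning pattern the summary describes — updating a forecaster one observation at a time, predicting before training on each new sample — can be sketched in plain Python. The model below is a hypothetical AR-style learner on synthetic data, not the paper's actual architecture:

```python
import random

class OnlineARForecaster:
    """Autoregressive forecaster updated one sample at a time via SGD.

    Illustrative sketch only: the paper's concrete online models are not
    specified in this summary; this shows the incremental-update pattern.
    """

    def __init__(self, lags=3, lr=0.1):
        self.lags = lags
        self.lr = lr
        self.w = [0.0] * lags   # one weight per lagged utilization reading
        self.b = 0.0
        self.window = []        # most recent `lags` observations

    def predict(self):
        if len(self.window) < self.lags:
            # Fall back to the last value until enough history accumulates.
            return self.window[-1] if self.window else 0.0
        return self.b + sum(w * x for w, x in zip(self.w, self.window))

    def learn(self, y):
        """Single SGD step on the newly observed utilization `y` (0..1)."""
        if len(self.window) == self.lags:
            err = self.predict() - y
            self.b -= self.lr * err
            self.w = [w - self.lr * err * x
                      for w, x in zip(self.w, self.window)]
        self.window.append(y)
        if len(self.window) > self.lags:
            self.window.pop(0)

# Test-then-train over a synthetic CPU-utilization stream (fractions of 1.0).
random.seed(0)
model = OnlineARForecaster()
cpu, abs_errors = 0.5, []
for _ in range(500):
    cpu = min(1.0, max(0.0, 0.9 * cpu + 0.05 + random.gauss(0, 0.02)))
    abs_errors.append(abs(model.predict() - cpu))  # predict first ...
    model.learn(cpu)                               # ... then update
mae_late = sum(abs_errors[-100:]) / 100            # error over the last 100 steps
```

Because each update touches only a fixed-size weight vector and window, both memory and per-step compute stay constant regardless of stream length — the property that makes such learners attractive on small gateways.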

📝 Abstract
The Internet of Things is an example domain where data is perpetually generated in ever-increasing quantities, reflecting the proliferation of connected devices and the formation of continuous data streams over time. Consequently, ad-hoc, cost-effective machine learning solutions are needed that can adapt to this evolving data influx. This study tackles the task of offloading in small gateways, a problem exacerbated by their dynamic availability over time. An approach leveraging CPU utilization metrics with online and continual machine learning techniques is proposed to predict gateway availability. These methods are compared against popular machine learning algorithms and a recent time-series foundation model, Lag-Llama, in both fine-tuned and zero-shot setups. Performance is benchmarked on a dataset of CPU utilization measurements over time from an IoT gateway, focusing on metrics such as prediction error, training and inference times, and memory consumption. Our primary objective is to study new, efficient ways to predict CPU performance in IoT environments. Across various scenarios, our findings highlight that ensemble and online methods offer promising accuracy for this task while maintaining a low resource footprint.
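The benchmarking protocol the abstract describes — scoring each prediction before the model trains on the new sample, while tracking training and inference time — is commonly called prequential (test-then-train) evaluation. A minimal sketch with two toy baselines on synthetic data (the paper's gateway trace is not reproduced here):

```python
import random
import time

def prequential_eval(predict, update, stream):
    """Test-then-train: score each prediction before the model sees the label."""
    abs_err, infer_t, train_t = 0.0, 0.0, 0.0
    for y in stream:
        t0 = time.perf_counter()
        y_hat = predict()
        infer_t += time.perf_counter() - t0   # accumulate inference time
        abs_err += abs(y_hat - y)
        t0 = time.perf_counter()
        update(y)
        train_t += time.perf_counter() - t0   # accumulate training time
    n = len(stream)
    return {"mae": abs_err / n, "infer_s": infer_t, "train_s": train_t}

class Naive:
    """Predicts the last observed value."""
    def __init__(self):
        self.last = 0.0
    def predict(self):
        return self.last
    def update(self, y):
        self.last = y

class EWMA:
    """Exponentially weighted moving-average forecaster."""
    def __init__(self, alpha=0.3):
        self.alpha, self.mean = alpha, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.mean += self.alpha * (y - self.mean)

# Synthetic CPU-utilization stream in [0, 1].
random.seed(1)
cpu, stream = 0.5, []
for _ in range(1000):
    cpu = min(1.0, max(0.0, 0.95 * cpu + 0.025 + random.gauss(0, 0.02)))
    stream.append(cpu)

results = {m.__class__.__name__: prequential_eval(m.predict, m.update, stream)
           for m in (Naive(), EWMA())}
```

The same loop can wrap any of the compared models — online learners, batch algorithms retrained periodically, or a foundation model queried zero-shot — so accuracy and resource cost are measured under identical streaming conditions.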
Problem

Research questions and friction points this paper is trying to address.

Predicting gateway availability in resource-constrained IoT environments
Comparing machine learning methods for CPU utilization prediction
Optimizing accuracy and resource efficiency in dynamic IoT data streams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online continual learning for gateway prediction
CPU utilization metrics as lightweight predictive features
Ensemble methods balance accuracy and efficiency
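The ensemble point above can be illustrated with inverse-error weighting, one common way to combine online learners incrementally; the paper's actual ensemble construction is not detailed in this summary, so the scheme below is a hedged sketch:

```python
import random

class Naive:
    """Predicts the last observed value."""
    def __init__(self):
        self.last = 0.0
    def predict(self):
        return self.last
    def update(self, y):
        self.last = y

class EWMA:
    """Exponentially weighted moving-average forecaster."""
    def __init__(self, alpha=0.3):
        self.alpha, self.mean = alpha, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.mean += self.alpha * (y - self.mean)

class ErrorWeightedEnsemble:
    """Weights each base learner by the inverse of its smoothed recent error."""
    def __init__(self, members, decay=0.9):
        self.members = members
        self.decay = decay
        self.err = [1e-3] * len(members)   # smoothed absolute error per member

    def predict(self):
        weights = [1.0 / e for e in self.err]
        total = sum(weights)
        return sum(w * m.predict()
                   for w, m in zip(weights, self.members)) / total

    def update(self, y):
        for i, m in enumerate(self.members):
            # Exponentially decay each member's error before it learns from y.
            self.err[i] = (self.decay * self.err[i]
                           + (1 - self.decay) * abs(m.predict() - y))
            m.update(y)

# Test-then-train over a synthetic CPU-utilization stream.
random.seed(2)
ens = ErrorWeightedEnsemble([Naive(), EWMA()])
cpu, errs = 0.5, []
for _ in range(500):
    cpu = min(1.0, max(0.0, 0.9 * cpu + 0.05 + random.gauss(0, 0.02)))
    errs.append(abs(ens.predict() - cpu))
    ens.update(cpu)
mae = sum(errs[-100:]) / 100
```

Decayed error tracking lets the ensemble shift weight toward whichever member currently fits the stream, which is how such combinations stay robust under drifting load while each member remains cheap to update.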
Sebastián A. Cajas Ordóñez
Ireland’s Centre for Artificial Intelligence (CeADAR), University College Dublin, D04 V2N9, Dublin, Ireland.
Jaydeep Samanta
Ireland’s Centre for Artificial Intelligence (CeADAR), University College Dublin, D04 V2N9, Dublin, Ireland.
Andrés L. Suárez-Cetrulo
Ireland's Centre for Artificial Intelligence (CeADAR), University College Dublin, D04 V2N9, Dublin, Ireland.
machine learning, data streams, concept drift, stock trend prediction, big data analytics
R. S. Carbajo
Ireland’s Centre for Artificial Intelligence (CeADAR), University College Dublin, D04 V2N9, Dublin, Ireland.