🤖 AI Summary
This work addresses the deployment challenges of Liquid State Machines (LSMs) in general-purpose AI, which stem from their sensitivity to hyperparameters and the neglect of energy consumption in conventional optimization approaches. To this end, we propose EARL, a novel framework that, for the first time, integrates energy awareness into LSM hyperparameter optimization. EARL synergistically combines Bayesian optimization with adaptive reinforcement learning to jointly optimize accuracy and energy efficiency. It further incorporates a surrogate model, dynamic candidate prioritization, and an early-stopping mechanism to significantly reduce computational overhead. Experimental results on three benchmark datasets demonstrate that EARL improves accuracy by 6%–15%, reduces energy consumption by 60%–80%, and decreases optimization time by nearly an order of magnitude compared to existing methods.
📝 Abstract
Pervasive AI increasingly depends on on-device learning systems that deliver low-latency, energy-efficient computation under strict resource constraints. Liquid State Machines (LSMs) offer a promising approach for low-power temporal processing in pervasive and neuromorphic systems, but their deployment remains challenging due to high hyperparameter sensitivity and the computational cost of traditional optimization methods that ignore energy constraints. This work presents EARL, an energy-aware reinforcement learning framework that integrates Bayesian optimization with an adaptive reinforcement-learning-based selection policy to jointly optimize accuracy and energy consumption. EARL employs surrogate modeling for global exploration, reinforcement learning for dynamic candidate prioritization, and an early-termination mechanism to eliminate redundant evaluations, substantially reducing computational overhead. Experiments on three benchmark datasets demonstrate that EARL achieves 6%–15% higher accuracy, 60%–80% lower energy consumption, and up to an order-of-magnitude reduction in optimization time compared to leading hyperparameter tuning frameworks. These results highlight the effectiveness of energy-aware adaptive search in improving the efficiency and scalability of LSMs for resource-constrained on-device AI applications.
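The abstract does not give implementation details, but the search loop it describes — a surrogate model for global exploration, a reinforcement-learning-style policy for candidate prioritization, an early-termination rule, and a joint accuracy/energy objective — can be sketched in miniature. Everything below is an illustrative assumption: `evaluate_lsm` is a synthetic stand-in for training an LSM, the nearest-neighbor surrogate is a toy substitute for a real Bayesian model (e.g. a Gaussian process), and the epsilon-greedy rule is a simplified proxy for the paper's adaptive selection policy.

```python
import random

def evaluate_lsm(params):
    """Stand-in for training/evaluating one LSM configuration.
    Returns (accuracy, energy); here a synthetic toy objective,
    NOT the paper's actual benchmark."""
    size, sparsity = params
    accuracy = 1.0 - (size - 0.6) ** 2 - (sparsity - 0.3) ** 2
    energy = size + 0.5 * sparsity  # larger reservoirs assumed to cost more energy
    return accuracy, energy

def scalarize(accuracy, energy, alpha=0.5):
    """Joint objective: reward accuracy, penalize energy (higher is better).
    alpha is an assumed trade-off weight."""
    return accuracy - alpha * energy

def surrogate_score(candidate, history):
    """Cheap surrogate: predict a candidate's score from the nearest
    already-evaluated point (a real system would fit a probabilistic model)."""
    if not history:
        return 0.0
    nearest = min(
        history,
        key=lambda h: sum((a - b) ** 2 for a, b in zip(h[0], candidate)),
    )
    return nearest[1]

def earl_style_search(n_rounds=30, pool_size=8, epsilon=0.2, patience=10, seed=0):
    """Energy-aware search sketch: surrogate-guided selection with
    epsilon-greedy exploration and early termination on stagnation."""
    rng = random.Random(seed)
    history, best, stale = [], (None, float("-inf")), 0
    for _ in range(n_rounds):
        # Candidate pool over two hyperparameters, both normalized to [0, 1].
        pool = [(rng.random(), rng.random()) for _ in range(pool_size)]
        # Prioritization: usually trust the surrogate, sometimes explore.
        if rng.random() < epsilon:
            cand = rng.choice(pool)
        else:
            cand = max(pool, key=lambda c: surrogate_score(c, history))
        acc, eng = evaluate_lsm(cand)  # the expensive step in a real run
        score = scalarize(acc, eng)
        history.append((cand, score))
        if score > best[1]:
            best, stale = (cand, score), 0
        else:
            stale += 1
        if stale >= patience:  # early termination: no recent improvement
            break
    return best

best_params, best_score = earl_style_search()
print(best_params, round(best_score, 3))
```

In this toy setup the optimum of the joint objective sits near `size = 0.6, sparsity = 0.3` shifted toward lower energy; the point of the sketch is only the control flow, i.e. how surrogate scoring, exploration, and the patience counter interact in one loop.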