🤖 AI Summary
Achieving scalable, low-power, end-to-end few-shot and continual learning on edge devices remains challenging under tight energy, area, and compute constraints. Method: This work proposes the first unified learning-and-inference architecture, featuring a temporal convolutional network (TCN) for raw time-series data (e.g., audio) and a reconfigurable, dual-mode, matrix-multiplication-free compute array that supports both on-chip training and inference with only 0.5% area overhead relative to the inference logic. Contribution/Results: Fabricated in 40-nm CMOS, the design achieves 96.8% accuracy on 5-way 1-shot Omniglot classification, 82.2% final accuracy in 250-class continual learning, and 93.3% accuracy on 12-class speech command inference, all while consuming merely 3.1 μW. This work breaks a critical bottleneck in co-optimizing few-shot learning, continual learning, and efficient inference for ultra-low-power edge intelligence.
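The summary does not spell out which few-shot algorithm runs on chip; a common choice for low-overhead on-device FSL in the N-way K-shot setting is prototype-based (nearest-class-mean) classification, where learning a new class reduces to averaging embeddings. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the chip's actual method:

```python
import numpy as np

def build_prototypes(support_embeddings, support_labels, n_way):
    """Average each class's support embeddings into one prototype
    vector per class; returns an (n_way, dim) array."""
    dim = support_embeddings.shape[1]
    prototypes = np.zeros((n_way, dim))
    for c in range(n_way):
        prototypes[c] = support_embeddings[support_labels == c].mean(axis=0)
    return prototypes

def classify(query_embedding, prototypes):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(prototypes - query_embedding, axis=1)
    return int(np.argmin(dists))
```

In a 5-way 1-shot episode this amounts to storing five embedding vectors and comparing each query against them, which is why such schemes add so little logic on top of an inference datapath.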
📝 Abstract
On-device learning at the edge enables low-latency, private personalization with improved long-term robustness and reduced maintenance costs. Yet, achieving scalable, low-power end-to-end on-chip learning, especially from real-world sequential data with a limited number of examples, is an open challenge. Indeed, accelerators supporting error backpropagation optimize for learning performance at the expense of inference efficiency, while simplified learning algorithms often fail to reach acceptable accuracy targets. In this work, we present Chameleon, leveraging three key contributions to solve these challenges. (i) A unified learning and inference architecture supports few-shot learning (FSL), continual learning (CL) and inference at only 0.5% area overhead to the inference logic. (ii) Long temporal dependencies are efficiently captured with temporal convolutional networks (TCNs), enabling the first demonstration of end-to-end on-chip FSL and CL on sequential data and inference on 16-kHz raw audio. (iii) A dual-mode, matrix-multiplication-free compute array allows either matching the power consumption of state-of-the-art inference-only keyword spotting (KWS) accelerators or enabling 4.3× higher peak GOPS. Fabricated in 40-nm CMOS, Chameleon sets new accuracy records on Omniglot for end-to-end on-chip FSL (96.8% 5-way 1-shot, 98.8% 5-way 5-shot) and CL (82.2% final accuracy for learning 250 classes with 10 shots), while maintaining an inference accuracy of 93.3% on the 12-class Google Speech Commands dataset at an extreme-edge power budget of 3.1 μW.
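TCNs capture long temporal dependencies with stacks of dilated causal convolutions whose dilation doubles per layer, so the receptive field grows exponentially with depth while each output still depends only on past samples. The abstract does not give Chameleon's kernel sizes or layer counts, so the following is a minimal NumPy sketch of the general mechanism under assumed parameters, not the chip's configuration:

```python
import numpy as np

def dilated_causal_conv(x, weights, dilation):
    """1-D causal convolution with dilation: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (zero-padded past)."""
    k = len(weights)
    pad = (k - 1) * dilation          # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(weights[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel_size, num_layers):
    """Receptive field of a TCN stack whose dilation doubles each layer
    (1, 2, 4, ...): 1 + (k - 1) * (2**L - 1)."""
    return 1 + (kernel_size - 1) * (2 ** num_layers - 1)
```

For example, with kernel size 2 an eight-layer stack already sees 256 past samples per output, which is how a compact TCN can summarize long windows of 16-kHz raw audio without recurrence.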