AI Summary
This work addresses anomaly detection on resource-constrained microcontrollers by proposing a "Train Once, Share Everywhere" (TOSE) paradigm that eliminates per-device retraining. The approach uses a lightweight K-Means clustering algorithm for local feature extraction and adaptive threshold estimation, integrated within a Distributed Internet of Learning (DIoL) framework. By encoding trained models into a textual representation, the method enables direct cross-device model reuse without additional training. Experimental validation on a dual-device prototype demonstrates that the shared model maintains consistent detection performance, matches the inference speed of independently deployed models, and incurs negligible parsing overhead, significantly improving deployment efficiency and system scalability in edge environments with limited computational resources.
Abstract
This paper presents a lightweight K-Means anomaly detection model and a distributed model-sharing workflow designed for resource-constrained microcontrollers (MCUs). Using real power measurements from a mini-fridge appliance, the system performs on-device feature extraction, clustering, and threshold estimation to identify abnormal appliance behavior. To avoid retraining models on every device, we introduce the Distributed Internet of Learning (DIoL), which enables a model trained on one MCU to be exported as a portable, text-based representation and reused directly on other devices. A two-device prototype demonstrates the feasibility of the "Train Once, Share Everywhere" (TOSE) approach using a real-world appliance case study, where Device A trains the model and Device B performs inference without retraining. Experimental results show consistent anomaly detection behavior, negligible parsing overhead, and identical inference runtimes between standalone and DIoL-based operation. The proposed framework enables scalable, low-cost TinyML deployment across fleets of embedded devices.