K-Means Based TinyML Anomaly Detection and Distributed Model Reuse via the Distributed Internet of Learning (DIoL)

📅 2026-03-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses anomaly detection on resource-constrained microcontrollers by proposing a “Train Once, Share Everywhere” (TOSE) paradigm that eliminates the need for per-device retraining. The approach leverages a lightweight K-Means clustering algorithm to perform local feature extraction and adaptive threshold estimation, integrated within a Distributed Internet of Learning (DIoL) framework. By encoding models into a textual representation, the method enables direct cross-device model reuse without additional training. Experimental validation on a dual-device prototype demonstrates that the shared model maintains consistent detection performance, achieves inference speeds comparable to independently deployed models, and incurs negligible parsing overhead. This significantly improves deployment efficiency and scalability in edge environments with limited computational resources.
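The on-device pipeline the summary describes (K-Means clustering followed by adaptive threshold estimation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`kmeans_fit`, `fit_threshold`, `is_anomaly`), the 1-D treatment of power readings, and the mean-plus-k-sigma threshold rule are all assumptions, since the paper's exact feature extraction and threshold formula are not reproduced here.

```python
import math
import random

def kmeans_fit(samples, k=2, iters=20, seed=0):
    """Plain Lloyd's algorithm over 1-D power readings (hypothetical sketch)."""
    rng = random.Random(seed)
    centroids = rng.sample(samples, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in samples:
            j = min(range(k), key=lambda c: abs(x - centroids[c]))
            buckets[j].append(x)
        # Recompute each centroid as its cluster mean; keep old value if empty.
        centroids = [sum(b) / len(b) if b else centroids[j]
                     for j, b in enumerate(buckets)]
    return centroids

def nearest_dist(x, centroids):
    """Distance from one reading to its closest centroid."""
    return min(abs(x - c) for c in centroids)

def fit_threshold(samples, centroids, factor=3.0):
    """Adaptive threshold: mean + factor * std of the training distances
    (the paper's exact rule may differ)."""
    d = [nearest_dist(x, centroids) for x in samples]
    mu = sum(d) / len(d)
    sd = math.sqrt(sum((v - mu) ** 2 for v in d) / len(d))
    return mu + factor * sd

def is_anomaly(x, centroids, threshold):
    """Flag a reading whose nearest-centroid distance exceeds the threshold."""
    return nearest_dist(x, centroids) > threshold
```

Training runs once (on Device A, in the paper's setup); afterwards each reading costs only `k` distance computations, which is what keeps inference cheap enough for an MCU.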

๐Ÿ“ Abstract
This paper presents a lightweight K-Means anomaly detection model and a distributed model-sharing workflow designed for resource-constrained microcontrollers (MCUs). Using real power measurements from a mini-fridge appliance, the system performs on-device feature extraction, clustering, and threshold estimation to identify abnormal appliance behavior. To avoid retraining models on every device, we introduce the Distributed Internet of Learning (DIoL), which enables a model trained on one MCU to be exported as a portable, text-based representation and reused directly on other devices. A two-device prototype demonstrates the feasibility of the "Train Once, Share Everywhere" (TOSE) approach using a real-world appliance case study, where Device A trains the model and Device B performs inference without retraining. Experimental results show consistent anomaly detection behavior, negligible parsing overhead, and identical inference runtimes between standalone and DIoL-based operation. The proposed framework enables scalable, low-cost TinyML deployment across fleets of embedded devices.
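The DIoL sharing step hinges on a portable, text-based model encoding that a second device can parse directly, with no retraining. A minimal roundtrip sketch is below; the field layout (`KMEANS;centroids=...;threshold=...`) is an invented placeholder, since the paper's actual wire format is not given here.

```python
def export_model(centroids, threshold):
    """Serialize a trained model to one compact text line (hypothetical format)."""
    cs = ",".join(f"{c:.4f}" for c in centroids)
    return f"KMEANS;centroids={cs};threshold={threshold:.4f}"

def parse_model(text):
    """Reconstruct the model on a receiving device -- no retraining required."""
    kind, *pairs = text.split(";")
    if kind != "KMEANS":
        raise ValueError(f"unsupported model type: {kind}")
    fields = dict(p.split("=", 1) for p in pairs)
    centroids = [float(v) for v in fields["centroids"].split(",")]
    return centroids, float(fields["threshold"])
```

In the two-device prototype's terms, Device A would transmit the exported string and Device B would call `parse_model` once at startup; doing the parse a single time is consistent with the reported negligible parsing overhead.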
Problem

Research questions and friction points this paper is trying to address.

TinyML
anomaly detection
model reuse
resource-constrained devices
distributed learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

TinyML
K-Means anomaly detection
Distributed Internet of Learning
model reuse
microcontroller
Abdulrahman Albaiz
Department of Computer Science & Engineering, Wright State University, Dayton, Ohio, USA
Fathi Amsaad
Wright State University
Hardware Security
IoT Security
Trusted Microelectronics
Tiny Machine Learning
AI Hardware