From Tiny Machine Learning to Tiny Deep Learning: A Survey

📅 2025-06-21
🤖 AI Summary
To address the challenge of deploying deep learning models on resource-constrained edge devices, this paper introduces the "TinyDL" paradigm, systematically charting the evolution from TinyML to TinyDL. Methodologically, it integrates hardware-aware co-design, model compression (including quantization and pruning), neural architecture search (NAS), and domain-specific compilers and AutoML toolchains, enabling efficient execution across heterogeneous platforms such as MCUs and neural accelerators. Key innovations include in-memory computing adaptation, a federated TinyDL framework, and edge-native lightweight foundation models. The work demonstrates practical impact across computer vision, speech recognition, and healthcare monitoring applications. It establishes a comprehensive TinyDL technology map spanning algorithms, tools, hardware, and application scenarios, providing both theoretical foundations and actionable implementation pathways for scalable edge AI deployment.
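To make the model-compression theme concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, one of the techniques the summary lists. This is an illustrative example, not code from the surveyed paper; the function names and the NumPy-based formulation are assumptions.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights to int8."""
    scale = np.max(np.abs(w)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Toy weight tensor
w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# per-element reconstruction error is bounded by scale / 2
```

Storing `q` instead of `w` cuts weight memory 4x, which is the kind of saving that makes MCU-class deployment feasible.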

📝 Abstract
The rapid growth of edge devices has driven the demand for deploying artificial intelligence (AI) at the edge, giving rise to Tiny Machine Learning (TinyML) and its evolving counterpart, Tiny Deep Learning (TinyDL). While TinyML initially focused on enabling simple inference tasks on microcontrollers, the emergence of TinyDL marks a paradigm shift toward deploying deep learning models on severely resource-constrained hardware. This survey presents a comprehensive overview of the transition from TinyML to TinyDL, encompassing architectural innovations, hardware platforms, model optimization techniques, and software toolchains. We analyze state-of-the-art methods in quantization, pruning, and neural architecture search (NAS), and examine hardware trends from MCUs to dedicated neural accelerators. Furthermore, we categorize software deployment frameworks, compilers, and AutoML tools enabling practical on-device learning. Applications across domains such as computer vision, audio recognition, healthcare, and industrial monitoring are reviewed to illustrate the real-world impact of TinyDL. Finally, we identify emerging directions including neuromorphic computing, federated TinyDL, edge-native foundation models, and domain-specific co-design approaches. This survey aims to serve as a foundational resource for researchers and practitioners, offering a holistic view of the ecosystem and laying the groundwork for future advancements in edge AI.
Problem

Research questions and friction points this paper is trying to address.

Surveying the transition from TinyML to TinyDL for edge AI deployment
Analyzing optimization techniques such as quantization, pruning, and neural architecture search (NAS)
Reviewing applications in computer vision, audio, healthcare, and industrial monitoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Charting the transition from TinyML to TinyDL on edge devices
Model optimization via quantization, pruning, and NAS
Software frameworks and toolchains enabling practical on-device learning
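Alongside quantization, the survey highlights pruning as a core optimization. A minimal sketch of unstructured magnitude pruning follows; the function name and NumPy formulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

# Toy tensor with magnitudes 1..10; pruning at 50% zeroes the five smallest
w = np.arange(1, 11, dtype=np.float32)
pruned = magnitude_prune(w, 0.5)
```

In practice the resulting sparse tensors only save memory and latency when paired with a sparse storage format or hardware that skips zeros, which is why the survey treats pruning jointly with compilers and accelerator support.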