AdapSNE: Adaptive Fireworks-Optimized and Entropy-Guided Dataset Sampling for Edge DNN Training

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing DNN training on edge devices suffers from excessive reliance on large-scale data and prohibitive computational overhead. Conventional DNN-free sampling methods, e.g., Near-Memory Sampling (NMS), perform poorly because their search strategies are mismatched to the non-monotonic, perplexity-based error function and their critical parameters are tuned empirically, resulting in numerous outliers, uneven sampling, and low sample representativeness. This paper proposes an adaptive DNN-free dataset sampling framework: first, t-SNE is employed for dimensionality reduction; second, the Fireworks Algorithm (FWA) optimizes the parameter search under non-monotonic error constraints to suppress outliers; third, an entropy-guided uniform sampling strategy eliminates the dependence on empirically chosen parameters. The framework enables high-representativeness subset selection directly on edge devices and integrates a dedicated lightweight accelerator. Experiments demonstrate that the method significantly improves edge DNN training accuracy while maintaining low energy consumption and small hardware area overhead.
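The FWA-based search described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: a toy 1-D non-monotonic function stands in for the perplexity-based error, and all constants and names here are illustrative assumptions. The key idea shown is that fireworks with lower error get small explosion amplitudes (local refinement) while worse ones explode widely (global exploration), which suits non-monotonic objectives better than monotone line search.

```python
import random

# Toy non-monotonic error function standing in for the paper's
# perplexity-based error (hypothetical; the real objective is derived
# from the t-SNE embedding, not this formula).
def error(p):
    return (p - 30.0) ** 2 / 100.0 + 5.0 * abs(((p / 7.0) % 2.0) - 1.0)

def fireworks_search(lo, hi, n_fireworks=5, n_sparks=20, iters=50, seed=0):
    """Minimal Fireworks Algorithm sketch: amplitude grows with error,
    so good candidates refine locally and bad ones explore globally."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(n_fireworks)]
    for _ in range(iters):
        errs = [error(p) for p in pop]
        e_min, e_max = min(errs), max(errs)
        span = e_max - e_min + 1e-9
        sparks = []
        for p, e in zip(pop, errs):
            # Worse fireworks get larger amplitudes but fewer sparks.
            amp = max((hi - lo) * (e - e_min + 1e-9) / span,
                      (hi - lo) * 0.001)
            k = max(1, round(n_sparks * (e_max - e + 1e-9) / span))
            for _ in range(k):
                s = p + rng.uniform(-amp, amp)
                sparks.append(min(max(s, lo), hi))  # clamp to bounds
        # Elitism: keep the best candidates among fireworks and sparks.
        pop = sorted(pop + sparks, key=error)[:n_fireworks]
    return pop[0]

best = fireworks_search(5.0, 100.0)
```

Because the elite set always contains one wide-amplitude firework, the search keeps escaping the local minima that trip up monotone search on this kind of objective.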

📝 Abstract
Training deep neural networks (DNNs) directly on edge devices has attracted increasing attention, as it offers promising solutions to challenges such as domain adaptation and privacy preservation. However, conventional DNN training typically requires large-scale datasets, which imposes prohibitive overhead on edge devices, particularly for emerging large language model (LLM) tasks. To address this challenge, a DNN-free method (i.e., dataset sampling without a DNN), named NMS (Near-Memory Sampling), has been introduced. By first reducing the dimensionality of the dataset and then performing exemplar sampling in the reduced space, NMS avoids the architectural bias inherent in DNN-based methods and thus achieves better generalization. However, the state-of-the-art NMS suffers from two limitations: (1) the mismatch between the search method and the non-monotonic property of the perplexity error function leads to outliers in the reduced representation; (2) the key parameter (i.e., target perplexity) is selected empirically, introducing arbitrariness and leading to uneven sampling. These two issues cause representative bias in the exemplars, resulting in degraded accuracy. To address them, we propose AdapSNE, which integrates an efficient non-monotonic search method, namely the Fireworks Algorithm (FWA), to suppress outliers, and employs entropy-guided optimization to enforce uniform sampling, thereby ensuring representative training samples and consequently boosting training accuracy. To cut the edge-side cost of the iterative FWA search and entropy-guided optimization, we design an accelerator with a custom dataflow and time-multiplexing, markedly reducing on-device training energy and area.
Problem

Research questions and friction points this paper is trying to address.

Optimizes dataset sampling for edge DNN training
Reduces outliers in reduced representation space
Ensures uniform sampling through entropy guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fireworks Algorithm for outlier suppression
Entropy-guided optimization for uniform sampling
Custom accelerator for energy-efficient edge training
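The entropy-guided uniformity idea can be sketched as follows (an illustrative assumption, not the paper's exact formulation): bin the reduced 2-D points on a grid and score spread by the Shannon entropy of the occupancy histogram. A perfectly uniform spread maximizes this entropy, so an optimizer can select the embedding parameter that maximizes it instead of relying on a hand-picked target perplexity.

```python
import math
import random

def grid_entropy(points, bins=8):
    """Shannon entropy (bits) of a 2-D point set binned on a bins x bins
    grid; higher means more uniform (upper bound is 2 * log2(bins))."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    counts = {}
    for x, y in points:
        # Map each point to a grid cell; clamp the max value into the
        # last bin so the top edge is included.
        i = min(int((x - x0) / (x1 - x0 + 1e-12) * bins), bins - 1)
        j = min(int((y - y0) / (y1 - y0 + 1e-12) * bins), bins - 1)
        counts[(i, j)] = counts.get((i, j), 0) + 1
    n = len(points)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

rng = random.Random(1)
# Stand-ins for two candidate low-dimensional embeddings: one spread
# uniformly, one clumped (as an ill-chosen perplexity tends to produce).
uniform = [(rng.random(), rng.random()) for _ in range(2000)]
clumped = [(rng.gauss(0.5, 0.03), rng.gauss(0.5, 0.03)) for _ in range(2000)]
```

Here `grid_entropy(uniform)` scores close to the 6-bit maximum while the clumped embedding scores well below it, so maximizing this score steers the sampler toward even coverage without any empirically tuned threshold.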
Boran Zhao
School of Software Engineering
Hetian Liu
School of Software Engineering
Zihang Yuan
School of Software Engineering
Li Zhu
School of Software Engineering
Fan Yang
School of Computer Science and Technology
Lina Xie
The First Affiliated Hospital of Xi’an Jiaotong University
Tian Xia
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, China
Wenzhe Zhao
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, China
Pengju Ren
Professor, Xi'an Jiaotong University