🤖 AI Summary
This work addresses the challenge of efficiently scheduling multiple deep neural network (DNN) tasks on edge accelerators under unpredictable arrival times, a scenario where existing preemptive approaches suffer from high runtime overhead and depend on known task arrival patterns. To overcome these limitations, the paper proposes a parallel subgraph-isomorphism-based scheduling framework that integrates multi-particle optimization with the Ullmann algorithm. By introducing probabilistic continuous relaxation, the method eliminates the serial dependencies inherent in traditional search procedures. It further combines quantized scheduling with a hardware-aware global controller to enable consensus-guided exploration. The resulting framework supports real-time interruption and rescheduling of DNN tasks with unpredictable arrivals, achieving orders-of-magnitude reductions in both scheduling latency and energy consumption on edge devices.
📝 Abstract
The growing demand for multi-DNN workloads with unpredictable task arrival times has highlighted the need for interruptible scheduling on edge accelerators. However, existing preemptive frameworks typically assume known task arrival times and rely on CPU-based offline scheduling, which incurs heavy runtime overhead and struggles to handle unpredictable task arrivals. Even worse, prior studies have shown that multi-DNN scheduling requires solving an NP-hard subgraph isomorphism problem on large directed acyclic graphs within limited time, which is extremely challenging. To tackle this, we propose IMMSched, a parallel subgraph isomorphism method that combines multi-particle optimization with the Ullmann algorithm via a probabilistic continuous-relaxation scheme, eliminating the serial data dependencies of prior approaches. In addition, a quantized scheduling scheme and a global controller in the hardware architecture combine the multi-particle results for consensus-guided exploration. Evaluations demonstrate that IMMSched achieves orders-of-magnitude reductions in scheduling latency and energy consumption, enabling real-time execution of unpredictable DNN tasks on edge accelerators.
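For background on the core subproblem: the Ullmann algorithm (1976) is a classic backtracking search for subgraph isomorphism that maintains, for each pattern vertex, a candidate set of target vertices and prunes it by checking neighborhood consistency. The sketch below is a minimal serial version of that classic algorithm over adjacency matrices, not the paper's parallel multi-particle variant; function and variable names are illustrative.

```python
def ullmann(P, G):
    """Find one embedding of pattern graph P into target graph G.
    P, G: symmetric 0/1 adjacency matrices (lists of lists).
    Returns mapping[i] = target vertex for pattern vertex i, or None."""
    n, m = len(P), len(G)
    degP = [sum(row) for row in P]
    degG = [sum(row) for row in G]
    # Initial candidates: pattern vertex i may map to target vertex j
    # only if j has at least i's degree.
    cand = [{j for j in range(m) if degG[j] >= degP[i]} for i in range(n)]

    def refine(cand):
        # Ullmann's refinement: drop j from cand[i] if some pattern
        # neighbor k of i has no candidate adjacent to j in G.
        # Iterate to a fixpoint; return False if any set empties.
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for j in list(cand[i]):
                    for k in range(n):
                        if P[i][k] and not any(G[j][l] for l in cand[k]):
                            cand[i].discard(j)
                            changed = True
                            break
        return all(cand[i] for i in range(n))

    def search(i, cand, used):
        if i == n:
            return []
        for j in cand[i]:
            if j in used:
                continue  # enforce an injective mapping
            trial = [set(c) for c in cand]
            trial[i] = {j}
            if refine(trial):
                rest = search(i + 1, trial, used | {j})
                if rest is not None:
                    return [j] + rest
        return None  # backtrack

    return search(0, cand, set())
```

The per-branch `refine` step is the serial bottleneck the abstract alludes to: each pruning pass depends on the previous one, which is what IMMSched's probabilistic continuous relaxation is designed to avoid.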