AI Summary
This paper addresses the joint deployment optimization of deep neural networks (DNNs) for edge collaborative inference under resource constraints and privacy requirements, aiming to minimize long-term average inference latency. We propose the first integrated framework combining Lyapunov optimization, coalition game modeling, and greedy heuristics: Lyapunov optimization enables dynamic resource–latency trade-offs; coalition game theory jointly models server association and model partitioning decisions; and implicit differential privacy constraints enforce end-to-end privacy budget adherence. The framework simultaneously achieves low latency, strong privacy guarantees, and scalability in dynamic edge environments. Simulation results demonstrate a 23.6%–38.1% reduction in inference latency compared to baseline methods, strict compliance with the prescribed privacy budget, and robust performance under highly variable workloads.
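The per-slot decision rule behind Lyapunov optimization can be illustrated with a minimal drift-plus-penalty sketch. The queue dynamics, the candidate actions, and the cost values below are generic placeholders, not the paper's actual formulation: each slot, the controller picks the action minimizing V·latency + Q·privacy_cost, where Q is a virtual queue tracking accumulated privacy-budget violation and V weights latency against constraint drift.

```python
def drift_plus_penalty_step(actions, Q, V, budget_per_slot):
    """Pick the action minimizing V*latency + Q*privacy_cost for one slot.

    actions: list of (latency, privacy_cost) pairs (illustrative values).
    Q: current virtual-queue backlog for the privacy budget.
    V: trade-off weight between the latency penalty and queue drift.
    Returns the chosen action index and the updated queue backlog.
    """
    best = min(range(len(actions)),
               key=lambda i: V * actions[i][0] + Q * actions[i][1])
    latency, cost = actions[best]
    # Virtual-queue update: backlog grows when this slot's privacy cost
    # exceeds the per-slot budget, and drains toward zero otherwise.
    Q_next = max(Q + cost - budget_per_slot, 0.0)
    return best, Q_next


# One simulated slot with three candidate (latency, privacy_cost) actions.
choice, Q = drift_plus_penalty_step(
    actions=[(10.0, 0.5), (6.0, 1.2), (8.0, 0.8)],
    Q=2.0, V=5.0, budget_per_slot=1.0)
```

As Q grows, privacy-heavy actions become increasingly expensive, which is how the long-term budget constraint is enforced without knowing future requests.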
Abstract
Edge inference (EI) is a key solution to the growing challenges of delayed response times, limited scalability, and privacy concerns in cloud-based Deep Neural Network (DNN) inference. However, deploying DNN models on resource-constrained edge devices raises severe challenges of its own, such as model storage limitations, dynamic service requests, and privacy risks. This paper proposes a novel framework for privacy-aware joint DNN model deployment and partition optimization that minimizes long-term average inference delay under resource and privacy constraints. Specifically, the problem is formulated as a complex optimization problem jointly considering model deployment, user-server association, and model partition strategies. To handle the NP-hardness and future uncertainties, a Lyapunov-based approach is introduced to transform the long-term optimization into a sequence of single-time-slot problems while preserving long-term performance guarantees. Additionally, a coalition formation game model is proposed for edge server association, and a greedy-based algorithm is developed for model deployment within each coalition to efficiently solve the problem. Extensive simulations show that the proposed algorithms effectively reduce inference delay while satisfying privacy constraints, outperforming baseline approaches in various scenarios.
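The greedy deployment step can be sketched in miniature. The server capacity, model sizes, and latency gains below are invented for the example, and the paper's actual algorithm operates inside each coalition with its own utility definition; the core idea shown is simply placing models in order of latency gain per unit of storage until nothing more fits:

```python
def greedy_deploy(capacity, models):
    """Greedily deploy DNN models onto a single edge server.

    capacity: remaining storage units on the server.
    models: dict name -> (size, latency_gain); values are illustrative.
    Considers models in descending order of gain density (latency
    reduction per storage unit), skipping any model that no longer
    fits, and returns the chosen deployment list.
    """
    deployed = []
    ranked = sorted(models.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for name, (size, gain) in ranked:
        if size <= capacity:
            deployed.append(name)
            capacity -= size
    return deployed


# Example server with 100 storage units and three candidate models.
plan = greedy_deploy(100, {
    "resnet": (60, 18.0),     # density 0.30
    "mobilenet": (20, 18.0),  # density 0.90
    "bert": (80, 20.0),       # density 0.25
})
```

Here mobilenet (highest density) is placed first, resnet still fits in the remaining 80 units, and bert is skipped; this density ordering is the standard knapsack-style heuristic that a greedy deployment routine typically relies on.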