To Offload or Not To Offload: Model-driven Comparison of Edge-native and On-device Processing

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic decision-making between local processing and edge offloading remains challenging due to hardware heterogeneity (e.g., diverse accelerators), time-varying network conditions, and multi-tenant interference. Method: the paper proposes a unified performance model that combines M/G/1 and M/M/m queueing results to jointly characterize these factors, and introduces an adaptive resource manager that uses the model to make low-latency, online offloading decisions. Contribution/Results: evaluated on cross-platform AI workloads in real-world edge environments, the model predicts latency with a mean absolute percentage error of only 2.2%; compared to static offloading policies, the manager reduces end-to-end latency by up to 37%, improving responsiveness and resource utilization in dynamic edge scenarios.

📝 Abstract
Computational offloading is a promising approach for overcoming resource constraints on client devices by moving some or all of an application's computations to remote servers. With the advent of specialized hardware accelerators, client devices are now able to perform fast local processing of specific tasks, such as machine learning inference, reducing the need for offloading computations. However, edge servers with accelerators also offer faster processing for offloaded tasks than was previously possible. In this paper, we present an analytic and experimental comparison of on-device processing and edge offloading for a range of accelerator, network, and application workload scenarios, with the goal of understanding when to use local on-device processing and when to offload computations. We present models that leverage analytical queuing results to capture the effects of dynamic factors such as the performance gap between the device and edge server, network variability, server load, and multi-tenancy on the edge server. We experimentally demonstrate the accuracy of our models for a range of hardware and application scenarios and show that our models achieve a mean absolute percentage error of 2.2% compared to observed latencies. We use our models to develop an adaptive resource manager for intelligent offloading and show its efficacy in the presence of variable network conditions and dynamic multi-tenant edge settings.
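The offload-or-not decision the abstract describes can be illustrated with textbook queueing formulas. The sketch below is a minimal illustration under standard assumptions (Poisson arrivals, stable queues), not the paper's actual model, and all parameter values in the usage note are made up: local on-device latency comes from an M/G/1 sojourn time (Pollaczek–Khinchine), edge latency from an M/M/m sojourn time (Erlang C) plus a network round-trip term, and the policy offloads when the edge estimate wins.

```python
import math

def mg1_latency(lam, mean_s, second_moment_s):
    """Mean sojourn time in an M/G/1 queue (Pollaczek-Khinchine formula).

    lam: arrival rate (req/s); mean_s, second_moment_s: E[S], E[S^2]
    of the on-device service time.
    """
    rho = lam * mean_s
    assert rho < 1, "device queue must be stable"
    wait = lam * second_moment_s / (2 * (1 - rho))
    return wait + mean_s

def erlang_c(m, a):
    """Probability that an arrival must wait in an M/M/m queue,
    given offered load a = lam/mu (requires a < m)."""
    s = sum(a**k / math.factorial(k) for k in range(m))
    top = a**m / (math.factorial(m) * (1 - a / m))
    return top / (s + top)

def mmm_latency(lam, mu, m):
    """Mean sojourn time in an M/M/m queue with per-server rate mu."""
    a = lam / mu
    assert a < m, "edge server pool must be stable"
    wait = erlang_c(m, a) / (m * mu - lam)
    return wait + 1 / mu

def should_offload(lam, local_mean, local_2nd, edge_mu, m, net_delay):
    """Offload iff predicted edge latency (queueing + network) beats local."""
    edge = mmm_latency(lam, edge_mu, m) + net_delay
    local = mg1_latency(lam, local_mean, local_2nd)
    return edge < local
```

For example, with 5 req/s, a deterministic 150 ms local service time (so E[S^2] = 0.0225), two edge servers at 20 req/s each, and 50 ms network delay, the local M/G/1 estimate is 375 ms while the edge estimate is about 100 ms, so the rule offloads; raise the network delay enough and the decision flips, which is the kind of dynamic sensitivity the paper's resource manager tracks online.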
Problem

Research questions and friction points this paper is trying to address.

Compare on-device vs edge offloading for accelerator scenarios
Model dynamic factors like network variability and server load
Develop adaptive resource manager for intelligent offloading decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-driven comparison of edge and on-device processing
Analytical queuing models for dynamic performance factors
Adaptive resource manager for intelligent offloading