🤖 AI Summary
This study addresses the energy-latency-reliability trade-off in data compression and transmission for resource-constrained wireless devices in edge computing. We propose an application-driven end-to-end latency budgeting mechanism, departing from conventional hard real-time constraints. A joint optimization model is formulated, with compression ratio and device processing speed as the key decision variables, to characterize their nonlinear interdependencies and compute the Pareto-optimal frontier. Theoretical analysis and experiments show that a modest relaxation of the end-to-end latency budget yields substantial energy savings, since energy cost grows exponentially as the latency target tightens. The proposed framework provides a quantifiable, configurable design paradigm for low-power, adaptive edge communication while still meeting reliability requirements.
📝 Abstract
With the advent of edge computing, data generated by end devices can be pre-processed before transmission, possibly saving transmission time and energy. On the other hand, data processing itself incurs latency and energy consumption, depending on the complexity of the computing operations and the speed of the processor. The energy-latency-reliability profile resulting from the concatenation of pre-processing operations (specifically, data compression) and data transmission is particularly relevant in wireless communication services, whose requirements may change dramatically with the application domain. In this paper, we study this multi-dimensional optimization problem, introducing a simple model to investigate the trade-off among end-to-end latency, reliability, and energy consumption when considering compression and communication operations in a constrained wireless device. We then study the Pareto fronts of the energy-latency trade-off, considering the data compression ratio and device processing speed as the key design variables. Our results show that the energy cost grows exponentially as the end-to-end latency is reduced, so that considerable energy savings can be obtained by slightly relaxing the latency requirements of applications. These findings challenge conventional rigid communication latency targets, advocating instead for application-specific end-to-end latency budgets that account for computational and transmission overhead.
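The compress-then-transmit trade-off described above can be illustrated with a small sketch. The snippet below is not the paper's model; it is a toy pipeline in which all constants (data size, link rate, radio power, CPU energy coefficient, compression cost per bit) are made-up placeholders. It sweeps the two design variables the abstract names, compression ratio and processing speed, computes end-to-end latency and total energy for each pair, and extracts the Pareto-optimal points.

```python
# Toy compress-then-transmit model (illustrative constants, not from the paper).
D = 1e6          # raw data size, bits (assumed)
R = 1e6          # wireless link rate, bits/s (assumed)
P_TX = 0.5       # radio transmit power, W (assumed)
K = 1e-27        # CPU dynamic-energy coefficient, J/(cycle * (cycles/s)^2) (assumed)
C_PER_BIT = 10.0 # compression cycles per input bit, scaled by ratio (assumed)

def profile(r, f):
    """End-to-end latency (s) and energy (J) for compression ratio r, CPU speed f (Hz)."""
    cycles = C_PER_BIT * D * r      # stronger compression costs more cycles (assumed)
    t_comp = cycles / f             # compression latency
    e_comp = K * cycles * f**2      # classic CMOS dynamic-energy scaling
    t_tx = (D / r) / R              # transmitting the compressed payload
    e_tx = P_TX * t_tx
    return t_comp + t_tx, e_comp + e_tx

# Sweep the two decision variables over a small grid.
points = [profile(r, f)
          for r in (1.0, 2.0, 4.0, 8.0)
          for f in (0.5e9, 1e9, 2e9)]

# A point is Pareto-optimal if no other point is at least as good in both
# latency and energy (and better in at least one).
pareto = sorted(p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                           for q in points))
```

Sorting the surviving points by latency yields a front along which energy strictly decreases, which is the qualitative shape the abstract reports: tightening the latency budget pushes the device toward fast, energy-hungry operating points.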