🤖 AI Summary
To address high energy consumption, elevated task failure rates, and resource underutilization in edge AI-as-a-Service, this paper proposes a Transformer-based DNN inference task offloading mechanism. We introduce the Transformer architecture—novel in edge task offloading—for modeling task dependencies encoded as directed acyclic graphs (DAGs), and formulate a supervised learning framework trained on historical data. The framework minimizes mobile-device energy consumption subject to strict latency and edge-server capacity constraints. Experimental results demonstrate that, under resource-constrained conditions, our approach reduces mobile-device energy consumption by 18% and significantly lowers task failure rates compared to baseline methods, while maintaining near-optimal decision quality. The core contribution lies in the first application of Transformers to edge offloading decision-making, enabling joint improvements in energy efficiency and service reliability.
📝 Abstract
Artificial intelligence (AI) has become a pivotal force in reshaping next-generation mobile networks. Edge computing holds promise in enabling AI as a service (AIaaS) for prompt decision-making by offloading deep neural network (DNN) inference tasks to the edge. However, current methodologies exhibit limitations in offloading these tasks efficiently, leading to possible resource underutilization and waste of mobile devices' energy. To tackle these issues, in this paper, we study AIaaS at the edge and propose an efficient offloading mechanism for well-known DNN architectures such as ResNet and VGG16. We model the inference tasks as directed acyclic graphs and formulate a problem that aims to minimize the devices' energy consumption while adhering to their latency requirements and accounting for servers' capacity. To solve this problem effectively, we utilize a Transformer DNN architecture. By training on historical data, we obtain a feasible and near-optimal solution to the problem. Our findings reveal that the proposed Transformer model improves energy efficiency compared to established baseline schemes. Notably, when edge computing resources are limited, our model exhibits an 18% reduction in energy consumption and significantly decreases task failure rates compared to existing works.
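To make the problem setting concrete, the sketch below illustrates the kind of decision the paper's offloading mechanism makes: each DNN layer in a task DAG is assigned to the device ("local") or the edge server ("edge"), and the goal is the minimum-energy feasible assignment under a latency deadline and an edge capacity limit. All layer names, per-layer latency/energy numbers, and the fixed per-transfer cost are illustrative assumptions, not values from the paper; the brute-force search here stands in for the paper's learned Transformer policy, and the return transfer of the final result is ignored for simplicity.

```python
import itertools

# Hypothetical per-layer profile for a toy 4-layer model (numbers are
# illustrative): (local_latency, edge_latency, local_energy, tx_energy).
LAYERS = {
    "conv1": (4.0, 1.0, 3.0, 0.8),
    "conv2": (6.0, 1.5, 4.5, 0.8),
    "pool":  (1.0, 0.3, 0.5, 0.4),
    "fc":    (2.0, 0.5, 1.5, 0.6),
}
ORDER = ["conv1", "conv2", "pool", "fc"]  # topological order of the task DAG

def evaluate(assignment, deadline, capacity):
    """Return device energy if `assignment` (layer -> 'local'/'edge') meets
    the deadline and edge capacity, else None. Latency accumulates along the
    chain; a fixed transfer latency plus the layer's radio energy is charged
    at every local/edge placement switch."""
    TX_LAT = 0.5  # assumed per-transfer latency
    if sum(1 for p in assignment.values() if p == "edge") > capacity:
        return None
    latency = energy = 0.0
    prev = "local"  # input data starts on the device
    for layer in ORDER:
        ll, el, le, te = LAYERS[layer]
        place = assignment[layer]
        if place != prev:          # crossing the device/edge boundary
            latency += TX_LAT
            energy += te           # radio energy paid by the device
        latency += ll if place == "local" else el
        if place == "local":
            energy += le           # compute energy only when run on-device
        prev = place
    return energy if latency <= deadline else None

def best_offloading(deadline, capacity):
    """Brute-force the minimum-energy feasible assignment. This search is
    exponential in the number of layers; the paper's contribution is to
    replace it with a Transformer trained on historical decisions."""
    best = None
    for choice in itertools.product(["local", "edge"], repeat=len(ORDER)):
        assignment = dict(zip(ORDER, choice))
        e = evaluate(assignment, deadline, capacity)
        if e is not None and (best is None or e < best[0]):
            best = (e, assignment)
    return best
```

With a generous deadline and capacity, the search offloads the whole chain (one uplink transfer dominates the device's energy bill); with capacity zero it degrades to fully local execution, which is exactly the regime where the paper reports its 18% energy savings over baselines.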