Distributed Hierarchical Machine Learning for Joint Resource Allocation and Slice Selection in In-Network Edge Systems

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of joint wireless/computing resource management and slice selection in dynamic, high-load edge environments (particularly for metaverse applications), this paper proposes a slice-enabled in-network edge architecture integrating computing-in-the-network (COIN) with multi-access edge computing (MEC). The authors design a distributed hierarchical DeepSets-S model featuring slack-aware normalization and task-specific decoders to ensure permutation equivariance over variable-size device sets. The approach combines decomposition of a mixed-integer nonlinear program (MINLP), offline training on optimally solved instances, and COIN/MEC co-scheduling. Experiments demonstrate that sub-problem accuracy reaches ≥95%, binary offloading accuracy improves to 88.24%, inference latency decreases by 86.1%, system cost stays close to the global optimum (deviation ≤6.1%), and resource utilization significantly outperforms baseline methods.

📝 Abstract
The Metaverse promises immersive, real-time experiences; however, meeting its stringent latency and resource demands remains a major challenge. Conventional optimization techniques struggle to respond effectively under dynamic edge conditions and high user loads. In this study, we explore a slice-enabled in-network edge architecture that combines computing-in-the-network (COIN) with multi-access edge computing (MEC). In addition, we formulate the joint problem of wireless and computing resource management with optimal slice selection as a mixed-integer nonlinear program (MINLP). Because solving this model online is computationally intensive, we decompose it into three sub-problems: (SP1) intra-slice allocation, (SP2) inter-slice allocation, and (SP3) offloading decision. We then train a distributed hierarchical DeepSets-based model (DeepSets-S) on optimal solutions obtained offline. In the proposed model, we design a slack-aware normalization mechanism for a shared encoder and task-specific decoders, ensuring permutation equivariance over variable-size wireless device (WD) sets. The learned system produces near-optimal allocations with low inference time and maintains permutation equivariance over variable-size device sets. Our experimental results show that DeepSets-S attains high tolerance-based accuracies on SP1/SP2 (Acc1 = 95.26% and 95.67%) and improves multiclass offloading accuracy on SP3 (Acc = 0.7486; binary local/offload Acc = 0.8824). Compared to exact solvers, the proposed approach reduces the execution time by 86.1%, while closely tracking the optimal system cost (within 6.1% in representative regimes). Compared with baseline models, DeepSets-S consistently achieves higher cost ratios and better utilization across COIN/MEC resources.
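The abstract reports "tolerance-based" accuracies for the continuous allocation sub-problems SP1/SP2. The paper's exact metric definition is not given here; one plausible reading is the fraction of predicted allocations that fall within a relative tolerance of the solver's optimal value, sketched below with a hypothetical 5% tolerance:

```python
import numpy as np

def tolerance_accuracy(pred, target, tol=0.05):
    """Fraction of predictions within a relative tolerance of the target.

    One possible reading of the abstract's "tolerance-based accuracy"
    for continuous resource allocations; the paper's own definition
    (and its tolerance value) may differ.
    """
    rel_err = np.abs(pred - target) / np.maximum(np.abs(target), 1e-9)
    return float(np.mean(rel_err <= tol))

# Hypothetical allocations vs. offline-optimal targets.
pred   = np.array([0.98, 1.03, 0.80, 2.00])
target = np.array([1.00, 1.00, 1.00, 2.02])
print(tolerance_accuracy(pred, target))  # 0.75: three of four within 5%
```

Under this reading, Acc1 = 95.26% means roughly 95% of per-device allocations land within the chosen tolerance of the exact-solver optimum.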
Problem

Research questions and friction points this paper is trying to address.

Optimizing joint wireless and computing resource allocation with slice selection
Reducing computational complexity of MINLP problems in dynamic edge environments
Achieving low-latency resource management for Metaverse applications at network edge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed hierarchical DeepSets model for resource allocation
Decomposed MINLP into three sub-problems for optimization
Slack-aware normalization with shared encoder and decoders
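The key structural property claimed for DeepSets-S is permutation equivariance over variable-size device sets: reordering the wireless devices reorders the outputs identically, so the model generalizes across set sizes. A minimal NumPy sketch of one equivariant DeepSets-style layer (hypothetical weights and dimensions; the paper's actual encoder, slack-aware normalization, and decoders are more elaborate):

```python
import numpy as np

def deepsets_equivariant(X, W_self, W_pool, b):
    """One permutation-equivariant layer in the DeepSets style (sketch).

    X: (n, d) array, one row of features per wireless device.
    Each output row depends on its own row plus an order-free set
    summary (the mean), so permuting the rows of X permutes the
    output rows in exactly the same way.
    """
    pooled = X.mean(axis=0, keepdims=True)  # set-level summary, order-invariant
    return np.tanh(X @ W_self + pooled @ W_pool + b)

# Hypothetical sizes: 4 devices, 3 input features, hidden width 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
W_self = rng.normal(size=(3, 2))
W_pool = rng.normal(size=(3, 2))
b = np.zeros(2)

out = deepsets_equivariant(X, W_self, W_pool, b)
perm = [2, 0, 3, 1]
out_perm = deepsets_equivariant(X[perm], W_self, W_pool, b)
assert np.allclose(out[perm], out_perm)  # equivariance holds
```

Because the pooling term is computed over whatever rows are present, the same weights apply to any number of devices, which is what lets one trained model serve variable-size WD sets.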
Sulaiman Muhammad Rashid
Department of Intelligent Electronics and Computer Engineering, Chonnam National University, Gwangju, South Korea
Ibrahim Aliyu
Department of Intelligent Electronics and Computer Engineering, Chonnam National University, Gwangju, South Korea
Jaehyung Park
Department of Intelligent Electronics and Computer Engineering, Chonnam National University, Gwangju, South Korea
Jinsul Kim
Professor of Computer Science and Engineering, Chonnam National University
Network, Cloud Computing, AI, Big Data