🤖 AI Summary
This work addresses the limitations of existing end-to-end SLA decomposition methods, which rely on computationally intensive iterative optimization and therefore struggle to meet the stringent real-time and scalability demands of 6G network slicing. To overcome this, the authors propose Casformer, a cascaded Transformer architecture that integrates amortized optimization with Domain-Informed Neural Networks (DINNs). By combining intra-domain historical-feedback encoders with an inter-domain dependency aggregator, Casformer performs SLA decomposition in a single feedforward pass, with no per-request iterative optimization. Extensive evaluations show that it outperforms state-of-the-art approaches across diverse network conditions, achieving superior decomposition quality, robustness, and scalability at low runtime complexity, making it well suited for real-time SLA management in 5G-and-beyond networks.
📝 Abstract
The evolution toward 6G networks increasingly relies on network slicing to provide tailored, End-to-End (E2E) logical networks over shared physical infrastructures. A critical challenge is effectively decomposing E2E Service Level Agreements (SLAs) into domain-specific SLAs, which current solutions handle through computationally intensive, iterative optimization processes that incur substantial latency and complexity. To address this, we introduce Casformer, a cascaded Transformer architecture designed for fast, optimization-free SLA decomposition. Casformer leverages historical domain feedback encoded through domain-specific Transformer encoders in its first layer, and integrates cross-domain dependencies using a Transformer-based aggregator in its second layer. The model is trained under a learning paradigm inspired by Domain-Informed Neural Networks (DINNs), incorporating risk-informed modeling and amortized optimization to learn a stable, forward-only SLA decomposition policy. Extensive evaluations demonstrate that Casformer achieves higher SLA decomposition quality than state-of-the-art optimization-based frameworks, while exhibiting enhanced scalability and robustness under volatile and noisy network conditions. In addition, its forward-only design reduces runtime complexity and simplifies deployment and maintenance. These insights reveal the potential of combining amortized optimization with Transformer-based sequence modeling to advance network automation, providing a scalable and efficient solution suitable for real-time SLA management in advanced 5G-and-beyond network environments.
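To make the two-layer cascade concrete, the following is a minimal numpy sketch of the data flow the abstract describes: a per-domain self-attention encoder over historical feedback (layer 1), followed by an aggregator that attends across the resulting domain embeddings and emits per-domain shares of the E2E budget (layer 2). All shapes, the mean-pooling step, the softmax head, the domain count, and the latency budget are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over a sequence X: (T, d).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 8          # embedding width (hypothetical)
T = 5          # length of each domain's historical-feedback window (hypothetical)
n_domains = 3  # e.g. RAN, transport, core

# Layer 1: one Transformer-style encoder per domain over its historical
# feedback sequence, mean-pooled into a single per-domain embedding.
domain_embeddings = []
for _ in range(n_domains):
    feedback = rng.normal(size=(T, d))  # stand-in for encoded domain feedback
    W = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
    encoded = self_attention(feedback, *W)
    domain_embeddings.append(encoded.mean(axis=0))

# Layer 2: the aggregator attends across per-domain embeddings to capture
# inter-domain dependencies, then a linear head plus softmax yields each
# domain's fraction of the E2E budget in one forward pass.
D = np.stack(domain_embeddings)  # (n_domains, d)
W2 = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
mixed = self_attention(D, *W2)
head = rng.normal(size=(d,)) * 0.1
shares = softmax(mixed @ head)

e2e_latency_budget = 20.0  # ms, hypothetical E2E SLA target
domain_slas = shares * e2e_latency_budget
print(domain_slas)
```

Because the softmax head forces the shares to sum to one, the domain-level SLAs always add up to the E2E budget, which is the feasibility property an iterative decomposer would otherwise enforce at runtime; the real model would of course learn the weights rather than sample them.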