LMTE: Putting the "Reasoning" into WAN Traffic Engineering with Language Models

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional WAN traffic engineering (TE) solvers in handling large-scale dynamic networks and the poor generalization of existing deep learning approaches under unseen traffic patterns or topologies. It presents the first integration of large language models (LLMs) into TE, leveraging the LLM as a universal planner by modeling TE as a sequential decision-making process and exploiting its parallel reasoning capabilities to construct an efficient and lightweight framework. The proposed method combines multimodal alignment with a lightweight configuration generation mechanism, achieving state-of-the-art performance across five real-world datasets: it reduces maximum link utilization (MLU) by up to 15%, exhibits less than 5% performance degradation under highly dynamic traffic and link failures, and accelerates solution speed by 10–100× compared to conventional methods.
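The summary's headline metric, maximum link utilization (MLU), is the standard TE objective: the highest load-to-capacity ratio across all links, which a planner tries to minimize. A minimal sketch of the metric, using illustrative link names and numbers (not from the paper):

```python
# Hypothetical sketch of the MLU metric; link names and values are illustrative.

def max_link_utilization(link_capacity, link_load):
    """MLU = max over links of load/capacity; lower is better."""
    return max(link_load[link] / link_capacity[link] for link in link_capacity)

# Toy 3-link network: capacities and loads in Gbps.
capacity = {"A-B": 100.0, "B-C": 100.0, "A-C": 50.0}
load = {"A-B": 60.0, "B-C": 45.0, "A-C": 40.0}

print(max_link_utilization(capacity, load))  # 0.8 (link A-C is the bottleneck)
```

A 15% MLU reduction, as reported, means the most congested link carries proportionally less traffic, leaving more headroom for bursts and failures.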

📝 Abstract
The rapid expansion of modern wide-area networks (WANs) has made traffic engineering (TE) increasingly challenging, as traditional solvers struggle to keep pace. Although existing offline ML-driven approaches accelerate TE optimization with deep neural networks (DNNs), they often lack sufficient expressiveness and generalization on unseen traffic patterns or topologies, limiting their practicality. Inspired by the success of large language models (LMs), for the first time, this paper investigates their potential as general-purpose traffic planners. Our contributions are two-fold: (i) Theoretically, we show that pre-trained LMs can simulate the sequential decision processes underlying TE and, crucially, exhibit parallel reasoning capabilities, making them well-suited for the task; (ii) Practically, we present LMTE, a novel LM-driven TE framework that embraces these insights through efficient multimodal alignment and lightweight configuration generation, all while preserving the model's original abilities. Extensive experiments demonstrate that LMTE matches top-tier performance on five datasets, achieving up to 15% better maximum link utilization (MLU) and consistently lower performance degradation across diverse scenarios, e.g., less than 5% with high traffic dynamics and link failures. Moreover, it achieves 10 to 100 times speedups over traditional TE solvers. To aid future works, our codebase is available at https://github.com/Y-debug-sys/LMTE.
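The abstract's theoretical claim rests on viewing TE as a sequential decision process: demands are placed one at a time, and each placement is an action that changes the network state seen by later decisions. A minimal illustration of that framing (a greedy baseline, not the paper's LM-based method; all names and numbers are hypothetical) routes each demand on the candidate path that least increases the running MLU:

```python
# Hypothetical illustration of TE as sequential decision-making.
# Each demand is one decision step: pick the candidate path that keeps
# the running maximum link utilization (MLU) lowest. Greedy baseline only;
# this is NOT the paper's LM-driven planner.

def route_sequentially(capacity, demands, candidate_paths):
    load = {link: 0.0 for link in capacity}
    routing = {}
    for flow, volume in demands:
        best_path, best_mlu = None, float("inf")
        for path in candidate_paths[flow]:
            # MLU if this demand were placed entirely on `path`
            trial = max(
                (load[l] + (volume if l in path else 0.0)) / capacity[l]
                for l in capacity
            )
            if trial < best_mlu:
                best_path, best_mlu = path, trial
        for l in best_path:  # commit the decision; later steps see this state
            load[l] += volume
        routing[flow] = best_path
    return routing, max(load[l] / capacity[l] for l in capacity)

capacity = {"A-B": 100.0, "B-C": 100.0, "A-C": 50.0}
demands = [("A->C", 60.0), ("A->B", 30.0)]
candidate_paths = {"A->C": [["A-C"], ["A-B", "B-C"]], "A->B": [["A-B"]]}
routing, mlu = route_sequentially(capacity, demands, candidate_paths)
print(routing, mlu)  # A->C takes the two-hop detour; final MLU is 0.9
```

The paper's point is that a pre-trained LM can simulate exactly this kind of step-by-step state update, and, via parallel reasoning, evaluate many such decisions at once rather than strictly one by one.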
Problem

Research questions and friction points this paper is trying to address.

Traffic Engineering
Wide-Area Networks
Generalization
Scalability
Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Traffic Engineering
Parallel Reasoning
Multimodal Alignment
WAN Optimization