🤖 AI Summary
This paper addresses the joint routing and scheduling problem in queueing networks with unknown transmission costs: at each time slot, the controller observes noisy cost feedback only for the links it selects, and must simultaneously ensure queue stability and minimize long-term expected cost. The problem poses dual challenges—online learning (balancing exploration and exploitation) and network control (co-optimizing throughput and cost)—which render conventional bandit methods ineffective because they ignore queueing dynamics. To bridge this gap, the paper proposes the first online learning algorithm that integrates the Lyapunov drift-plus-penalty framework with optimistic cost estimation, unifying stability constraints and cost learning within a single design. It establishes a regret bound of $O(\sqrt{T}\log T)$, guaranteeing sublinear regret growth. Simulations demonstrate that the algorithm significantly outperforms baseline methods in both steady-state performance and convergence speed.
📝 Abstract
We consider the problem of joint routing and scheduling in queueing networks where the edge transmission costs are unknown. At each time slot, the network controller receives noisy observations of transmission costs only for those edges it selects for transmission. The network controller's objective is to make routing and scheduling decisions so that the total expected cost is minimized. This problem exhibits an exploration-exploitation trade-off; however, previous bandit-style solutions cannot be directly applied to it because of the queueing dynamics. To ensure network stability, the network controller needs to optimize throughput and cost simultaneously. We show that the best achievable cost is lower-bounded by the solution to a static optimization problem, and develop a network control policy using techniques from Lyapunov drift-plus-penalty optimization and multi-armed bandits. We show that the policy achieves a sub-linear regret of order $O(\sqrt{T}\log T)$ compared to the best policy that has complete knowledge of arrivals and costs. Finally, we evaluate the proposed policy using simulations and show that its regret is indeed sub-linear.
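The combination of drift-plus-penalty control with optimistic cost estimation described above can be illustrated on a toy instance. The sketch below is *not* the paper's algorithm—the network (one queue, three outgoing links), the arrival rate, the trade-off parameter `V`, and the UCB-style confidence bonus are all illustrative assumptions. Each slot, the controller picks the link minimizing a drift-plus-penalty score built from an optimistic (lower-confidence) cost estimate, then updates its empirical cost estimates from the noisy feedback of the chosen link:

```python
import math
import random

random.seed(0)

# Hypothetical toy instance: one queue, three outgoing links with unknown
# mean transmission costs; equal, known service rates.
true_cost = [0.9, 0.5, 0.7]   # unknown to the controller
rate      = [1.0, 1.0, 1.0]   # packets served per slot on the chosen link
V         = 10.0              # drift-plus-penalty trade-off parameter (assumed)

Q = 0.0                       # queue backlog
n = [0] * 3                   # times each link has been selected
c_hat = [0.0] * 3             # empirical mean of observed costs

for t in range(1, 5001):
    Q += random.random() < 0.8    # Bernoulli(0.8) arrivals

    # Optimistic cost estimate: subtract a UCB-style confidence bonus so
    # rarely-tried links look cheap and are explored.
    def score(i):
        bonus = math.sqrt(2 * math.log(t) / n[i]) if n[i] > 0 else float("inf")
        c_opt = max(0.0, c_hat[i] - bonus)
        # Drift-plus-penalty weight: V * (optimistic cost) minus
        # backlog-weighted service, as in Lyapunov drift-plus-penalty control.
        return V * c_opt - Q * rate[i]

    i = min(range(3), key=score)

    Q -= min(Q, rate[i])                         # serve the queue
    obs = true_cost[i] + random.gauss(0, 0.1)    # noisy bandit cost feedback
    n[i] += 1
    c_hat[i] += (obs - c_hat[i]) / n[i]          # running-mean update
```

Under this score, the backlog term `-Q * rate[i]` pushes toward stability while `V * c_opt` steers traffic to (estimated) cheap links; since all rates are equal here, the controller concentrates on the truly cheapest link once its confidence bonus shrinks.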