Queueing-Aware Optimization of Reasoning Tokens for Accuracy-Latency Trade-offs in LLM Servers

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing accuracy and latency for heterogeneous queries on large language model (LLM) servers under a constrained inference token budget. The authors model the query stream as an M/G/1 queueing system, assigning a fixed number of tokens to each task class. They formulate an optimization problem that trades off weighted average accuracy against average system time, subject to stability and total token budget constraints. Integrating queueing theory with LLM inference resource allocation, they prove the objective function is strictly concave within the stability region, characterize the optimum via coupled projected fixed-point iterations, and develop a globally convergent projected gradient method. An integer rounding scheme with a provable performance loss bound is also devised. Simulations demonstrate that the proposed approach significantly improves the overall accuracy–latency trade-off, with negligible performance degradation from integer rounding.
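
The M/G/1 model underlying the summary can be illustrated with a short sketch: with fixed per-class token counts, the service time is deterministic given the class, and the mean system time follows the Pollaczek–Khinchine formula. The snippet below (all parameter values are illustrative assumptions, not taken from the paper) checks that formula against a discrete-event FIFO simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-class instance: priors p and per-class service times s
# (affine in the allocated tokens, here given directly).
lam = 0.5                      # Poisson arrival rate
p = np.array([0.6, 0.4])       # class priors
s = np.array([0.5, 1.5])       # per-class (deterministic) service times

ES, ES2 = p @ s, p @ s**2      # first and second service-time moments
rho = lam * ES                 # utilization; stability requires rho < 1
# Pollaczek-Khinchine mean system time for the M/G/1 queue
pk_system_time = ES + lam * ES2 / (2 * (1 - rho))

# Discrete-event simulation of the same FIFO queue.
n = 200_000
arrivals = np.cumsum(rng.exponential(1 / lam, n))
services = rng.choice(s, size=n, p=p)
depart = np.empty(n)
depart[0] = arrivals[0] + services[0]
for i in range(1, n):
    depart[i] = max(arrivals[i], depart[i - 1]) + services[i]
sim_system_time = np.mean(depart - arrivals)
```

The empirical mean system time agrees closely with the closed-form value, which is what lets the paper optimize latency analytically through the service-time moments rather than by simulation.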

📝 Abstract
We consider a single large language model (LLM) server that serves a heterogeneous stream of queries belonging to $N$ distinct task types. Queries arrive according to a Poisson process, and each type occurs with a known prior probability. For each task type, the server allocates a fixed number of internal thinking tokens, which determines the computational effort devoted to that query. The token allocation induces an accuracy-latency trade-off: the service time follows an approximately affine function of the allocated tokens, while the probability of a correct response exhibits diminishing returns. Under a first-in, first-out (FIFO) service discipline, the system operates as an $M/G/1$ queue, and the mean system time depends on the first and second moments of the resulting service-time distribution. We formulate a constrained optimization problem that maximizes a weighted average accuracy objective penalized by the mean system time, subject to architectural token-budget constraints and queue-stability conditions. The objective function is shown to be strictly concave over the stability region, which ensures existence and uniqueness of the optimal token allocation. The first-order optimality conditions yield a coupled projected fixed-point characterization of the optimum, together with an iterative solution and an explicit sufficient condition for contraction. Moreover, a projected gradient method with a computable global step-size bound is developed to guarantee convergence beyond the contractive regime. Finally, integer-valued token allocations are attained via rounding of the continuous solution, and the resulting performance loss is evaluated in simulation results.
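
The optimization described in the abstract can be sketched in a few lines. The accuracy curve, cost weight, and all numerical parameters below are hypothetical stand-ins (a saturating exponential for diminishing returns, an affine service time), chosen only so that the stability condition holds over the whole feasible set; the projection handles the nonnegativity and total token-budget constraints:

```python
import numpy as np

# Hypothetical 3-class instance (all functional forms and numbers are
# assumptions for illustration, not the paper's parameterization).
N = 3
p = np.array([0.5, 0.3, 0.2])       # prior probability of each task type
w = np.array([1.0, 2.0, 1.5])       # accuracy weights
lam = 0.8                            # Poisson arrival rate
alpha, beta = 0.1, 0.01              # affine service time: s_i = alpha + beta*t_i
a_max = np.array([0.9, 0.95, 0.8])   # accuracy ceilings
k = np.array([0.15, 0.10, 0.20])     # diminishing-returns rates
c = 0.5                              # latency penalty weight
B = 60.0                             # token budget: sum_i t_i <= B

def mean_system_time(t):
    """Pollaczek-Khinchine mean system time (deterministic per-class service)."""
    s = alpha + beta * t
    ES, ES2 = p @ s, p @ s**2
    return ES + lam * ES2 / (2.0 * (1.0 - lam * ES))

def objective(t):
    acc = w * p * a_max * (1.0 - np.exp(-k * t))  # weighted expected accuracy
    return acc.sum() - c * mean_system_time(t)

def project(t):
    """Euclidean projection onto {t >= 0, sum(t) <= B}."""
    t = np.maximum(t, 0.0)
    if t.sum() <= B:
        return t
    u = np.sort(t)[::-1]                      # simplex projection (Duchi et al.)
    css = np.cumsum(u) - B
    idx = np.arange(1, len(t) + 1)
    j = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(t - css[j] / (j + 1.0), 0.0)

def grad(t, eps=1e-6):
    """Central-difference gradient (kept numerical for a compact sketch)."""
    g = np.zeros_like(t)
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = eps
        g[i] = (objective(t + e) - objective(t - e)) / (2 * eps)
    return g

# Projected gradient ascent from a feasible, stable starting point.
t0 = np.full(N, 5.0)
t = t0.copy()
eta = 2.0
for _ in range(500):
    t = project(t + eta * grad(t))
```

With these parameters the feasible set lies strictly inside the stability region, so the iteration never crosses the rho = 1 boundary; the paper instead enforces stability explicitly and supplies a computable global step-size bound in place of the fixed `eta` used here.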
Problem

Research questions and friction points this paper is trying to address.

LLM server, accuracy-latency trade-off, token allocation, queueing theory, M/G/1 queue
Innovation

Methods, ideas, or system contributions that make the work stand out.

queueing-aware optimization, reasoning tokens, accuracy-latency trade-off, M/G/1 queue, projected gradient method
Emre Ozbas
Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Türkiye
Melih Bastopcu
Assistant Professor, Bilkent University
wireless communications, age of information, information theory, networks