Pricing Online LLM Services with Data-Calibrated Stackelberg Routing Game

📅 2025-11-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Real-time, scalable, profit-optimal pricing for online large language model (LLM) services in large-scale dynamic markets remains challenging due to heterogeneous user quality-of-service (QoS) preferences and strategic provider pricing. Method: We propose PriLLM—a data-calibrated Stackelberg game-theoretic routing framework that jointly models service provider pricing strategies and user equilibrium choices under multi-dimensional QoS preferences. To enhance scalability, we introduce a deep aggregation network that learns compact, interpretable provider representations while preserving user-side Nash equilibrium constraints and pricing transparency. Results: Evaluated on real-world data, PriLLM achieves over 95% of the theoretical maximum profit using less than 5% of the computational time required by the optimal solution—demonstrating high practicality, strong scalability, and cross-market generalizability.

📝 Abstract
The proliferation of Large Language Models (LLMs) has established LLM routing as a standard service delivery mechanism, where users select models based on cost, Quality of Service (QoS), and other criteria. However, optimal pricing in LLM routing platforms requires precise modeling of dynamic service markets, and solving this problem in real time at scale is computationally intractable. In this paper, we propose PriLLM, a novel, practical, and scalable solution for real-time dynamic pricing in competitive LLM routing. PriLLM models the service market as a Stackelberg game, where providers set prices and users select services based on multiple criteria. To capture real-world market dynamics, we incorporate both objective factors (e.g., cost, QoS) and subjective user preferences into the model. For scalability, we employ a deep aggregation network to learn provider abstractions that preserve user-side equilibrium behavior across pricing strategies. Moreover, PriLLM offers interpretability by explaining its pricing decisions. Empirical evaluation on real-world data shows that PriLLM achieves over 95% of the optimal profit while requiring less than 5% of the optimal solution's computation time.
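The leader-follower structure described above can be illustrated with a toy example. This is a minimal sketch, not PriLLM itself: it assumes a single leader provider with a fixed QoS level, one hypothetical rival offer, and users whose heterogeneous preferences are reduced to a single QoS weight. The leader anticipates each user's best response and picks the profit-maximizing price from a grid.

```python
# Toy Stackelberg pricing sketch (illustrative only; all values hypothetical).
# Leader: one provider choosing a price from a grid.
# Followers: users with heterogeneous QoS weights who pick the
# utility-maximizing option between the leader and a fixed competitor.

COMPETITOR = {"price": 1.0, "qos": 0.6}   # hypothetical rival offer
PROVIDER_QOS = 0.9                        # leader's fixed QoS level
USER_WEIGHTS = [0.5, 1.0, 1.5, 2.0, 3.0]  # users' valuation of QoS

def user_choice(price, qos_weight):
    """Follower best response: pick the option with the higher utility
    (QoS valuation minus price)."""
    u_leader = qos_weight * PROVIDER_QOS - price
    u_rival = qos_weight * COMPETITOR["qos"] - COMPETITOR["price"]
    return "leader" if u_leader >= u_rival else "rival"

def leader_profit(price):
    """Leader profit = price times the number of users it attracts."""
    buyers = sum(1 for w in USER_WEIGHTS if user_choice(price, w) == "leader")
    return price * buyers

# Leader anticipates follower responses and picks the best price on a grid.
best_price = max((p / 10 for p in range(1, 31)), key=leader_profit)
```

In this toy market, raising the price sheds price-sensitive users one by one, and the leader's optimum balances margin against the number of retained users; PriLLM solves the analogous problem with many providers and multi-dimensional QoS preferences.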
Problem

Research questions and friction points this paper is trying to address.

Optimizing real-time dynamic pricing for competitive LLM routing platforms
Modeling service markets with objective factors and user preferences
Achieving scalable computation while preserving near-optimal profit performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Stackelberg game for dynamic pricing
Learns provider abstraction via deep aggregation
Combines objective factors and user preferences
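The provider-abstraction idea can be sketched in miniature. This is a crude hypothetical stand-in for the paper's deep aggregation network: instead of a learned representation, it buckets near-identical providers by rounded (price, QoS) and keeps one representative per bucket, then checks that users' best responses are unchanged. All providers, weights, and the `aggregate` helper are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of provider abstraction (NOT the paper's deep
# aggregation network): merge near-duplicate providers and verify that
# each user's best response is preserved on the compact market.

PROVIDERS = [                      # (name, price, qos) -- all made up
    ("A", 1.00, 0.80), ("B", 1.02, 0.80),  # near-duplicates of each other
    ("C", 0.50, 0.40), ("D", 0.52, 0.40),
]
USER_WEIGHTS = [0.5, 1.5, 3.0]     # heterogeneous QoS valuations

def best_response(providers, w):
    """User picks the provider maximizing w * qos - price."""
    return max(providers, key=lambda p: w * p[2] - p[1])[0]

def aggregate(providers, grid=0.1):
    """Bucket providers by rounded (price, qos); keep one per bucket."""
    buckets = {}
    for name, price, qos in providers:
        key = (round(price / grid), round(qos / grid))
        buckets.setdefault(key, (name, price, qos))
    return list(buckets.values())

compact = aggregate(PROVIDERS)     # 2 representatives instead of 4
```

The design point mirrored here is that the abstraction is only useful if it is equilibrium-preserving: pricing decisions computed on the compact market must induce the same user choices as the full market.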
Zhendong Guo
School of Computer Science and Engineering, Southeast University
Wenchao Bai
School of Computer Science and Engineering, Southeast University
Jiahui Jin
Southeast University
Cloud Computing · Big Data · Graph Database · Task Scheduling