🤖 AI Summary
This work exposes a vulnerability in pay-per-token pricing for large language model (LLM) cloud services: because users cannot verify how many tokens a model actually consumed to generate an output, providers have a financial incentive to strategically overreport token counts and opaquely overcharge. The authors show that, if a provider is obliged to be transparent about the generative process, misreporting optimally without raising suspicion is hard; nevertheless, they design an efficient heuristic algorithm demonstrating that providers can still significantly overcharge users in practice under current token-counting schemes. To eliminate the financial incentive to strategize altogether, they propose a simple incentive-compatible pricing mechanism in which users pay a fixed price per character of the output rather than per token. Experiments with models from the Llama, Gemma, and Ministral families, using input prompts from the LMSYS Chatbot Arena platform, illustrate and complement the theoretical results.
📝 Abstract
State-of-the-art large language models require specialized hardware and substantial energy to operate. As a consequence, cloud-based services that provide access to large language models have become very popular. In these services, the price users pay for an output provided by a model depends on the number of tokens the model uses to generate it: they pay a fixed price per token. In this work, we show that this pricing mechanism creates a financial incentive for providers to strategize and misreport the (number of) tokens a model used to generate an output, and users cannot prove, or even know, whether a provider is overcharging them. However, we also show that, if an unfaithful provider is obliged to be transparent about the generative process used by the model, misreporting optimally without raising suspicion is hard. Nevertheless, as a proof-of-concept, we introduce an efficient heuristic algorithm that allows providers to significantly overcharge users without raising suspicion, highlighting the vulnerability of users under the current pay-per-token pricing mechanism. Further, to completely eliminate the financial incentive to strategize, we introduce a simple incentive-compatible token pricing mechanism. Under this mechanism, the price users pay for an output provided by a model depends on the number of characters of the output: they pay a fixed price per character. Along the way, to illustrate and complement our theoretical results, we conduct experiments with several large language models from the $\texttt{Llama}$, $\texttt{Gemma}$ and $\texttt{Ministral}$ families, and input prompts from the LMSYS Chatbot Arena platform.
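The core incentive problem can be sketched in a few lines of Python. This is a hypothetical illustration (not the paper's algorithm, and the prices and token sequences are made up): the same output string can be produced by different token sequences, so a per-token charge is manipulable by a provider who misreports the tokenization, whereas a per-character charge depends only on the text the user actually receives.

```python
# Assumed prices, for illustration only.
PRICE_PER_TOKEN = 0.5
PRICE_PER_CHAR = 0.1

def token_charge(tokens):
    """Pay-per-token: the charge depends on the reported token count."""
    return PRICE_PER_TOKEN * len(tokens)

def char_charge(tokens):
    """Pay-per-character: the charge depends only on the output string."""
    return PRICE_PER_CHAR * len("".join(tokens))

# The tokenization the model (hypothetically) actually used.
honest = ["hello", " world"]
# A longer tokenization a strategic provider could report instead.
inflated = ["he", "ll", "o", " ", "wo", "rld"]

# Both token sequences decode to the identical output string,
# so the user cannot tell which one was really used.
assert "".join(honest) == "".join(inflated)

print(token_charge(honest), token_charge(inflated))  # 1.0 vs 3.0: hidden overcharge
print(char_charge(honest), char_charge(inflated))    # identical: no incentive to misreport
```

Under the per-character mechanism the two reports yield the same bill, which is exactly why overreporting tokens stops paying off.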