Token Sugar: Making Source Code Sweeter for LLMs through Token-Efficient Shorthand

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) achieve strong performance on code tasks, yet their practical deployment is hindered by high inference latency and memory overhead—primarily due to token inflation from syntactic redundancies (e.g., formatting tokens, boilerplate code). To address this, we propose *Token Sugar*, a semantics-preserving, reversible token compression framework for source code. It identifies frequent, semantically equivalent code patterns and constructs bidirectional shorthand mappings, enabling lossless compression of both input and output token sequences. Unlike syntax-level optimizations, Token Sugar operates at the semantic level and complements existing compiler- or parser-based techniques, integrating seamlessly into end-to-end pretraining pipelines. Experiments demonstrate up to 15.1% reduction in source code token count and up to 11.2% fewer tokens generated during inference, with no degradation in Pass@1 accuracy relative to baselines. To our knowledge, Token Sugar is the first framework enabling reversible, semantics-aware token compression fully compatible with full-cycle model training and inference.

📝 Abstract
Large language models (LLMs) have shown exceptional performance in code generation and understanding tasks, yet their high computational costs hinder broader adoption. One important factor is the inherent verbosity of programming languages, such as unnecessary formatting elements and lengthy boilerplate code. This inflates token counts in both inputs and generated outputs, which increases inference costs and slows down generation. Prior work mitigates this by simplifying programming-language grammar, reducing token usage across both code understanding and generation tasks. However, such approaches are confined to syntactic transformations, leaving significant token-reduction opportunities at the semantic level unrealized. In this work, we propose Token Sugar, a concept that replaces frequent and verbose code patterns with reversible, token-efficient shorthand in the source code. To realize this concept in practice, we design a systematic solution that mines high-frequency, token-heavy patterns from a code corpus, maps each to a unique shorthand, and integrates them into LLM pretraining via code transformation. With this solution, we obtain 799 (code pattern, shorthand) pairs, which reduce source-code token counts by up to 15.1% and are complementary to existing syntax-focused methods. We further trained three widely used LLMs on Token Sugar-augmented data. Experimental results show that these models not only achieve significant token savings (up to 11.2% reduction) during generation but also maintain near-identical Pass@1 scores compared to baselines trained on unprocessed code.
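The core idea of a reversible (code pattern, shorthand) mapping can be illustrated with a minimal sketch. The patterns, shorthand markers, and substitution strategy below are hypothetical stand-ins; the paper's actual 799 mined pairs and its integration with the tokenizer are not reproduced here. Losslessness in this sketch assumes the shorthand markers never occur in real source code.

```python
# Hypothetical (pattern, shorthand) pairs; the paper's mined pairs are not shown.
SUGAR = {
    "for i in range(len(": "⟪FRL⟫",
    'if __name__ == "__main__":': "⟪MAIN⟫",
}
DESUGAR = {short: pat for pat, short in SUGAR.items()}

def sugar(code: str) -> str:
    """Compress: replace each verbose pattern with its shorthand."""
    for pat, short in SUGAR.items():
        code = code.replace(pat, short)
    return code

def desugar(code: str) -> str:
    """Decompress: restore the original verbose patterns."""
    for short, pat in DESUGAR.items():
        code = code.replace(short, pat)
    return code

src = 'if __name__ == "__main__":\n    for i in range(len(xs)):\n        pass'
assert desugar(sugar(src)) == src  # lossless round trip
```

In a real system each shorthand would map to a single tokenizer token, so both prompt and generated sequences shrink, and a desugaring pass restores standard source code after decoding.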
Problem

Research questions and friction points this paper is trying to address.

Reduces token count in source code for LLMs
Replaces verbose code patterns with efficient shorthand
Maintains performance while lowering computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces verbose code patterns with reversible shorthand
Mines high-frequency token-heavy patterns from code corpus
Integrates shorthand into LLM pretraining via code transformation
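The mining step above can be sketched as frequency counting over fixed-length token n-grams, scored by how many tokens a shorthand replacement would save. This is a simplified assumption about the procedure: the paper's actual mining algorithm, scoring function, and tokenizer are not specified here, and whitespace `split()` stands in for a real code tokenizer.

```python
from collections import Counter

def mine_patterns(corpus, n=4, min_count=2):
    """Return candidate n-gram patterns, most token-saving first."""
    counts = Counter()
    for code in corpus:
        toks = code.split()  # stand-in for a real code tokenizer
        for i in range(len(toks) - n + 1):
            counts[tuple(toks[i:i + n])] += 1
    # Score = frequency * tokens saved if the n-gram collapses to one shorthand.
    scored = {p: c * (n - 1) for p, c in counts.items() if c >= min_count}
    return sorted(scored, key=scored.get, reverse=True)

corpus = [
    "for i in range ( len ( xs ) ) :",
    "for i in range ( len ( ys ) ) :",
]
top = mine_patterns(corpus, n=4)
```

Each surviving pattern would then be assigned a unique shorthand and applied to the pretraining corpus via code transformation, as the bullets above describe.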