Lossless Token Sequence Compression via Meta-Tokens

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the encoding overhead caused by redundancy in input token sequences for large language models (LLMs), this paper proposes a task-agnostic, fully reversible, lossless token compression method. Inspired by LZ77, the method uses sliding-window matching and reference encoding built on meta-tokens, guaranteeing zero syntactic or semantic loss. It shortens input sequences by 27% and 18% on average for the two evaluation tasks, which, given the quadratic cost of attention, reduces Transformer encoding computation by 47% and 33%, respectively. The method is architecture-agnostic, compatible with any Transformer-based model without fine-tuning or retraining. On tasks requiring strict semantic preservation, it comes close to matching the performance of uncompressed inputs and substantially outperforms existing lossy compression approaches, while compression and decompression overhead is negligible. The result is a lossless, reversible, and generalizable token-level compression technique for LLM inputs.

📝 Abstract
Existing work on prompt compression for Large Language Models (LLMs) focuses on lossy methods that try to maximize the retention of semantic information relevant to downstream tasks while significantly reducing the sequence length. In this paper, we introduce a task-agnostic lossless compression technique similar to LZ77 that makes it possible to reduce the input token sequence length on average by 27% and 18% for the two evaluation tasks explored here. Given that we use transformer-based LLMs, this equates to 47% and 33% less encoding computation, respectively, due to the quadratic nature of attention. The token sequence transformation is trivial to reverse, which highlights that no semantic information is lost in the process. We evaluate our proposed approach on two tasks that require strict preservation of semantics/syntax and demonstrate that existing lossy compression methods perform poorly in this setting. We find that our lossless compression technique produces only a small gap in performance compared to using the uncompressed input, and we posit that larger models and an expanded computing budget would likely erase the gap entirely.
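The stated compute savings follow directly from attention's quadratic cost in sequence length; a quick sanity check of the arithmetic:

```python
# Self-attention cost scales with the square of sequence length, so a
# fractional length reduction r cuts encoding compute by 1 - (1 - r)^2.
def attention_savings(length_reduction: float) -> float:
    return 1 - (1 - length_reduction) ** 2

print(f"{attention_savings(0.27):.0%}")  # 27% shorter -> ~47% less compute
print(f"{attention_savings(0.18):.0%}")  # 18% shorter -> ~33% less compute
```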
Problem

Research questions and friction points this paper is trying to address.

Develops lossless token compression for LLMs
Reduces input token length by 18-27%
Preserves semantics unlike lossy methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-agnostic lossless token sequence compression
LZ77-inspired meta-token transformation method
Reduces input token length by 18-27%
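The core idea of the LZ77-inspired transformation can be sketched in a few lines: repeated runs of tokens are replaced by a meta-token referencing an earlier occurrence, and the replacement is exactly invertible. This is a minimal illustration, assuming meta-tokens are represented as `("COPY", offset, length)` tuples; the paper's actual meta-token vocabulary, window size, and matching heuristics may differ.

```python
def compress(tokens, window=64, min_len=3):
    """LZ77-style lossless compression over a token sequence.

    Repeated runs are replaced by a ("COPY", offset, length) meta-token
    pointing back into the already-emitted sequence.
    """
    out, i = [], 0
    while i < len(tokens):
        best_off, best_len = 0, 0
        # Search the sliding window for the longest earlier match.
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(tokens)
                   and tokens[j + length] == tokens[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= min_len:
            out.append(("COPY", best_off, best_len))
            i += best_len
        else:
            out.append(tokens[i])  # emit a literal token
            i += 1
    return out

def decompress(compressed):
    """Reverse the transformation exactly (hence losslessness)."""
    out = []
    for item in compressed:
        if isinstance(item, tuple) and item[0] == "COPY":
            _, off, length = item
            for _ in range(length):  # one-at-a-time handles overlapping copies
                out.append(out[-off])
        else:
            out.append(item)
    return out
```

Because `decompress(compress(t)) == t` for any token sequence `t`, no syntactic or semantic information is lost, unlike lossy prompt-compression methods that drop or paraphrase content.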