🤖 AI Summary
To address the excessive trajectory length and high computational cost of chain-of-thought (CoT) reasoning in large language models (LLMs), this paper proposes a hybrid representation that jointly models raw text tokens and discrete latent tokens generated by a vector-quantized variational autoencoder (VQ-VAE). A simple training procedure that randomly mixes the two token types aligns their representations and lets the model adapt quickly to the new latent vocabulary, both when training from scratch and when fine-tuning an LLM with an extended vocabulary containing the unseen latent tokens. Evaluated on benchmarks including the Keys-Finding Maze problem and logical and mathematical reasoning tasks, the method reduces average input length by 32%, lowers computational overhead, and improves reasoning accuracy by 2.1–4.7 percentage points, demonstrating simultaneous gains in generalization and inference efficiency.
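The VQ-VAE discretization step can be illustrated with a toy sketch (all names, dimensions, and the codebook here are illustrative stand-ins, not the paper's actual architecture): each chunk of encoded reasoning text is mapped to the index of its nearest codebook vector, yielding one discrete latent token per chunk.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))  # 64 latent tokens, each a 16-dim embedding

def quantize(chunk_embeddings):
    """Map each chunk embedding to the index of its nearest codebook vector (L2)."""
    dists = np.linalg.norm(
        chunk_embeddings[:, None, :] - codebook[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)

chunks = rng.normal(size=(5, 16))  # 5 chunks of encoded reasoning text
latent_tokens = quantize(chunks)   # one discrete latent-token id per chunk
```

The key point is the compression: a span of many text tokens collapses to a single id drawn from a small latent vocabulary, which is what shortens the reasoning trace.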
📝 Abstract
Large Language Models (LLMs) excel at reasoning and planning when trained on chain-of-thought (CoT) data, where the step-by-step thought process is explicitly outlined in text tokens. However, this results in lengthy inputs in which many words support textual coherence rather than core reasoning information, and processing these inputs consumes substantial computational resources. In this work, we propose a hybrid representation of the reasoning process, where we partially abstract away the initial reasoning steps using latent discrete tokens generated by a VQ-VAE, significantly reducing the length of reasoning traces. We explore the use of latent trace abstractions in two scenarios: (1) training the model from scratch on the Keys-Finding Maze problem, and (2) fine-tuning LLMs on this hybrid data with a vocabulary extended to include the unseen latent tokens, for both logical and mathematical reasoning problems. To facilitate effective learning, we introduce a simple training procedure that randomly mixes latent and text tokens, enabling fast adaptation to the new latent tokens. Our approach consistently outperforms the baseline methods on various benchmarks.
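The random-mixing procedure can be sketched as follows (a minimal illustration, not the paper's implementation; `latent_of` is a hypothetical stand-in for the VQ-VAE encoder, and replacing one text step with one latent token is a simplification): for each training example, a random-length prefix of the CoT steps is swapped for its latent abstraction, so the model sees both token types interleaved throughout training.

```python
import random

def mix_trace(text_steps, latent_of, rng):
    """Replace the first m reasoning steps (m drawn uniformly at random)
    with their latent-token abstractions; keep the remaining steps as text."""
    m = rng.randint(0, len(text_steps))  # how many leading steps to abstract away
    return [latent_of(step) for step in text_steps[:m]] + text_steps[m:]

rng = random.Random(0)
steps = ["step1", "step2", "step3", "step4"]
mixed = mix_trace(steps, lambda s: f"<latent:{s}>", rng)
```

Varying the split point across examples is what exposes the model to every boundary between latent and text tokens, rather than a single fixed abstraction depth.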