Soft Tokens, Hard Truths

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing continuous-token chain-of-thought (CoT) methods face serious training difficulties: they either apply continuous tokens only at inference time on pre-trained discrete-token models, or rely on costly distillation from ground-truth discrete CoTs, which restricts the CoT to very few tokens. This paper introduces the first scalable, distillation-free reinforcement learning (RL) framework for learning continuous CoTs end to end. Its core mechanism is "soft" tokens: mixtures of token embeddings with noise added on the input embedding to drive RL exploration, at minimal computational overhead, enabling continuous CoTs of hundreds of tokens. On math reasoning benchmarks with Llama and Qwen models up to 8B parameters, continuous-CoT training matches discrete-token CoTs on pass@1 and surpasses them on pass@32, indicating greater CoT diversity. At inference time the trained models can revert to standard discrete token generation, so they deploy like ordinary LLMs, and continuous-CoT RL training better preserves the base model's predictions on out-of-domain tasks. Key contributions: (1) a joint soft-token + RL training paradigm; (2) a scalable, distillation-free design; and (3) continuous CoTs that extend to hundreds of reasoning tokens.

📝 Abstract
The use of continuous instead of discrete tokens during the Chain-of-Thought (CoT) phase of reasoning LLMs has garnered attention recently, based on the intuition that a continuous mixture of discrete tokens could simulate a superposition of several reasoning paths simultaneously. Theoretical results have formally proven that continuous tokens have much greater expressivity and can solve specific problems more efficiently. However, practical use of continuous tokens has been limited by strong training difficulties: previous works either just use continuous tokens at inference time on a pre-trained discrete-token model, or must distill the continuous CoT from ground-truth discrete CoTs and face computational costs that limit the CoT to very few tokens. This is the first work introducing a scalable method to learn continuous CoTs via reinforcement learning (RL), without distilling from reference discrete CoTs. We use "soft" tokens: mixtures of tokens together with noise on the input embedding to provide RL exploration. Computational overhead is minimal, enabling us to learn continuous CoTs with hundreds of tokens. On math reasoning benchmarks with Llama and Qwen models up to 8B, training with continuous CoTs matches discrete-token CoTs for pass@1 and surpasses them for pass@32, showing greater CoT diversity. In systematic comparisons, the best-performing scenario is to train with continuous CoT tokens then use discrete tokens for inference, meaning the "soft" models can be deployed in a standard way. Finally, we show continuous CoT RL training better preserves the predictions of the base model on out-of-domain tasks, thus providing a softer touch to the base model.
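
To make the mechanism concrete, here is a minimal sketch of the soft-token construction described above, assuming a PyTorch-style setup; the function name, the Gaussian noise form, and the `noise_scale` default are illustrative assumptions, not details published by the paper.

```python
import torch
import torch.nn.functional as F

def soft_token_embedding(logits: torch.Tensor,
                         embedding_matrix: torch.Tensor,
                         noise_scale: float = 0.1) -> torch.Tensor:
    """Build a 'soft' token: the probability-weighted mixture of all token
    embeddings, plus noise on the input embedding for RL exploration.

    logits:           (batch, vocab) next-token logits
    embedding_matrix: (vocab, d_model) input embedding table
    returns:          (batch, d_model) embedding fed back as the next input
    """
    probs = F.softmax(logits, dim=-1)      # mixture weights over the vocabulary
    mixture = probs @ embedding_matrix     # superposition of token embeddings
    noise = noise_scale * torch.randn_like(mixture)
    return mixture + noise
```

The mixture keeps each CoT step differentiable and lets one forward pass carry a weighted superposition of candidate tokens, while the added noise supplies the stochasticity that RL exploration requires.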
Problem

Research questions and friction points this paper is trying to address.

Training with continuous tokens faces severe optimization difficulties and computational limits
Previous methods either use continuous tokens only at inference time on discrete-token models or require costly distillation from ground-truth discrete CoTs
A scalable way to learn continuous CoTs without reference discrete CoTs is needed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous CoT learning via reinforcement learning, with no distillation from reference discrete CoTs
"Soft" tokens: mixtures of token embeddings with input noise for RL exploration (see the sketch below)
Minimal computational overhead, enabling reasoning chains of hundreds of continuous tokens
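
Tying these pieces together, the sketch below shows how a single CoT step might toggle between the soft regime used for training and the discrete regime used for deployment. It assumes a Hugging Face-style causal LM that accepts `inputs_embeds` and exposes `get_input_embeddings()`; the function name and `noise_scale` default are hypothetical.

```python
import torch
import torch.nn.functional as F

def cot_step(model, embeds: torch.Tensor, soft: bool = True,
             noise_scale: float = 0.1) -> torch.Tensor:
    """Extend a chain of thought by one token.

    soft=True : feed back a noisy embedding mixture (training-time exploration)
    soft=False: sample a discrete token, so the trained model deploys
                like a standard LLM
    """
    logits = model(inputs_embeds=embeds).logits[:, -1, :]     # (batch, vocab)
    probs = F.softmax(logits, dim=-1)
    emb_table = model.get_input_embeddings().weight           # (vocab, d_model)
    if soft:
        nxt = probs @ emb_table                               # token superposition
        nxt = nxt + noise_scale * torch.randn_like(nxt)       # exploration noise
    else:
        token = torch.multinomial(probs, num_samples=1)       # (batch, 1)
        nxt = model.get_input_embeddings()(token).squeeze(1)  # (batch, d_model)
    return torch.cat([embeds, nxt.unsqueeze(1)], dim=1)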
```

Per the abstract, the best-performing configuration trains with the soft branch and runs inference with the discrete branch, which is why the trained "soft" models can be deployed in a standard way.