Latent Thoughts Tuning: Bridging Context and Reasoning with Fused Information in Latent Tokens

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Latent Thoughts Tuning, a framework that addresses feature collapse and training instability in existing continuous latent-space reasoning methods, which often suffer from redundant reuse of hidden states or reliance on auxiliary models. The approach integrates lexical semantic signals into latent state construction and employs a Context-Prediction-Fusion mechanism to combine contextual and predictive information. A three-stage curriculum learning strategy further enables dynamic switching between implicit and explicit reasoning modes. This design mitigates distributional mismatch and alignment issues, yielding improvements in both accuracy and robustness across multiple benchmarks and outperforming current implicit reasoning approaches.

📝 Abstract
While explicit Chain-of-Thought (CoT) equips Large Language Models (LLMs) with strong reasoning capabilities, it requires models to verbalize every intermediate step in text tokens, constraining the model's thoughts to the discrete vocabulary space. Recently, reasoning in continuous latent space has emerged as a promising alternative, enabling more robust inference and flexible computation beyond discrete token constraints. However, current latent paradigms often suffer from feature collapse and instability, stemming from distribution mismatches when hidden states are recurrently reused as input embeddings, or from alignment issues when relying on assistant models. To address this, we propose Latent Thoughts Tuning (LT-Tuning), a framework that redefines how latent thoughts are constructed and deployed. Instead of relying solely on raw hidden states, our method introduces a Context-Prediction-Fusion mechanism that jointly leverages contextual hidden states and predictive semantic guidance from the vocabulary embedding space. Combined with a progressive three-stage curriculum learning pipeline, LT-Tuning also enables dynamic switching between latent and explicit thinking modes. Experiments demonstrate that our method outperforms existing latent reasoning baselines, effectively mitigating feature collapse and achieving robust reasoning accuracy.
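The abstract describes fusing contextual hidden states with predictive semantic guidance drawn from the vocabulary embedding space. A minimal sketch of one plausible reading of such a Context-Prediction-Fusion step is below; the paper does not publish this code, so the shapes, the sigmoid gate, tied input/output embeddings, and the function name `context_prediction_fusion` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def context_prediction_fusion(h, W_vocab, W_gate, b_gate):
    """Hypothetical fusion of a contextual hidden state with a predictive
    lexical signal into the next latent token.

    h       : (d,)    last-layer hidden state (the contextual signal)
    W_vocab : (V, d)  vocabulary embedding matrix (tied output head, an assumption)
    W_gate  : (d, 2d) and b_gate : (d,)  parameters of a learned fusion gate
    """
    # Predictive signal: a probability-weighted mixture of vocabulary
    # embeddings, anchoring the latent token in the lexical embedding space.
    logits = W_vocab @ h            # (V,) next-token logits
    probs = softmax(logits)         # (V,)
    pred = probs @ W_vocab          # (d,) "soft" token embedding

    # Gated fusion of context (h) and prediction (pred).
    gate = 1.0 / (1.0 + np.exp(-(W_gate @ np.concatenate([h, pred]) + b_gate)))
    return gate * h + (1.0 - gate) * pred

# Toy usage with random weights.
rng = np.random.default_rng(0)
d, V = 8, 32
h = rng.normal(size=d)
E = rng.normal(size=(V, d))
Wg = rng.normal(size=(d, 2 * d)) * 0.1
bg = np.zeros(d)
latent = context_prediction_fusion(h, E, Wg, bg)   # (d,) next latent input
```

Mixing in the soft token embedding is one way to keep recurrent latent inputs close to the distribution the model saw during pretraining, which is the mismatch the abstract says causes feature collapse.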
Problem

Research questions and friction points this paper is trying to address.

latent reasoning
feature collapse
distribution mismatch
alignment issues
continuous latent space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Thoughts Tuning
Context-Prediction-Fusion
latent reasoning
feature collapse mitigation
curriculum learning