SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing implicit chain-of-thought (CoT) methods suffer from two key bottlenecks: semantic misalignment between implicit and explicit reasoning, and high per-token generation latency. This paper proposes SemCoT, the first framework to jointly optimize semantic fidelity and inference speed for implicit reasoning. The authors introduce a contrastive-learning-based semantic evaluation mechanism that leverages a contrastively trained sentence encoder to quantify semantic consistency between implicit representations and explicit CoT traces, and design a lightweight knowledge-distillation framework that compresses implicit reasoning length while preserving logical coherence. The approach integrates implicit representation learning, efficient fine-tuning, and model distillation. Experiments across multiple benchmarks show that SemCoT outperforms state-of-the-art implicit reasoning approaches, maintaining or improving accuracy while reducing average inference latency by 32.7%.
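The contrastive semantic-evaluation idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: function names (`cosine_sim`, `alignment_loss`), the temperature value, and the InfoNCE-style formulation are assumptions; the paper's sentence encoder would supply the embedding vectors.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors (plain lists of floats).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def alignment_loss(implicit_emb, explicit_emb, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss (illustrative): pull the implicit-reasoning
    embedding toward its matching explicit-CoT embedding and push it away from
    embeddings of unrelated (negative) reasoning traces."""
    logits = [cosine_sim(implicit_emb, explicit_emb) / temperature]
    logits += [cosine_sim(implicit_emb, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))  # positive pair sits at index 0
```

A low loss indicates the implicit representation is semantically close to the ground-truth explicit trace, which is the signal the summary describes being used during implicit reasoning optimization.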

📝 Abstract
The verbosity of Chain-of-Thought (CoT) reasoning hinders its mass deployment in efficiency-critical applications. Recently, implicit CoT approaches have emerged, which encode reasoning steps within an LLM's hidden embeddings (termed "implicit reasoning") rather than explicit tokens. This approach accelerates CoT by reducing the reasoning length and bypassing some LLM components. However, existing implicit CoT methods face two significant challenges: (1) they fail to preserve the semantic alignment between the implicit reasoning (when transformed to natural language) and the ground-truth reasoning, resulting in a significant CoT performance degradation, and (2) they focus on reducing the length of the implicit reasoning but neglect the considerable time cost for an LLM to generate each individual implicit reasoning token. To tackle these challenges, we propose a novel semantically-aligned implicit CoT framework termed SemCoT. In particular, for the first challenge, we design a contrastively trained sentence transformer that evaluates semantic alignment between implicit and explicit reasoning, which is used to enforce semantic preservation during implicit reasoning optimization. To address the second challenge, we introduce an efficient implicit reasoning generator by finetuning a lightweight language model using knowledge distillation. This generator is guided by our sentence transformer to distill ground-truth reasoning into semantically aligned implicit reasoning, while also optimizing for accuracy. SemCoT is the first approach that enhances CoT efficiency by jointly optimizing token-level generation speed and preserving semantic alignment with ground-truth reasoning. Extensive experiments demonstrate the superior performance of SemCoT compared to state-of-the-art methods in both efficiency and effectiveness. Our code can be found at https://github.com/YinhanHe123/SemCoT/.
Problem

Research questions and friction points this paper is trying to address.

Improving semantic alignment between implicit and explicit reasoning steps
Reducing time cost of generating individual implicit reasoning tokens
Accelerating Chain-of-Thought reasoning while preserving reasoning accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a contrastively trained sentence transformer to score semantic alignment between implicit and explicit reasoning
Fine-tunes a lightweight LM with knowledge distillation to generate implicit reasoning efficiently
Jointly optimizes token-level generation speed and semantic alignment with ground-truth reasoning
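The joint optimization in the bullets above can be sketched as a combined objective. This is a hypothetical form, not the paper's exact loss: the name `joint_objective`, the weight `lam`, and the (1 - cosine similarity) penalty are illustrative assumptions about how a task loss and a semantic-alignment term might be balanced.

```python
import math

def joint_objective(task_loss, implicit_emb, explicit_emb, lam=0.5):
    """Hypothetical combined loss: answer-accuracy (task) loss plus a
    semantic-alignment penalty of (1 - cosine similarity), which grows as
    the implicit reasoning drifts away from the ground-truth explicit trace."""
    dot = sum(x * y for x, y in zip(implicit_emb, explicit_emb))
    ni = math.sqrt(sum(x * x for x in implicit_emb))
    ne = math.sqrt(sum(y * y for y in explicit_emb))
    return task_loss + lam * (1.0 - dot / (ni * ne))
```

With perfectly aligned embeddings the penalty vanishes and only the task loss remains; as alignment degrades, the penalty pushes the distilled generator back toward semantically faithful implicit reasoning.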