🤖 AI Summary
This work addresses the high computational cost of chain-of-thought (CoT) reasoning in large language models, which stems from generating lengthy intermediate steps, and the lack of theoretical grounding in existing compression approaches. We introduce Order-r Interaction theory, revealing that in implicit CoT compression, the learning signals for high-order logical dependencies decay exponentially, a key factor behind performance degradation. To tackle this, we construct NatBool-DAG, a benchmark that enforces irreducible logical reasoning, and propose ALiCoT, an aligned implicit CoT framework that achieves efficient compression by aligning implicit reasoning states with latent token distributions. Experiments show that ALiCoT attains a 54.4x inference speedup on NatBool-DAG while preserving accuracy comparable to explicit CoT.
Abstract
Chain-of-Thought (CoT) prompting has unlocked advanced reasoning abilities in Large Language Models (LLMs) through intermediate steps, yet it incurs prohibitive computational costs due to the generation of extra tokens. Recent studies empirically show that compressing reasoning steps into latent states, known as implicit CoT compression, offers a token-efficient alternative. However, the mechanism behind CoT compression remains unclear. In this paper, we provide the first theoretical analysis of the difficulty of learning to internalize intermediate reasoning steps. By introducing Order-r Interaction, we prove that the learning signal for high-order logical dependencies decays exponentially when solving irreducible problems, where skipping intermediate steps inevitably creates high-order interaction barriers. To validate this empirically, we introduce NatBool-DAG, a challenging benchmark designed to enforce irreducible logical reasoning and eliminate semantic shortcuts. Guided by our theoretical findings, we propose ALiCoT (Aligned Implicit CoT), a novel framework that overcomes the signal decay by aligning latent token distributions with intermediate reasoning states. Experimental results demonstrate that ALiCoT successfully unlocks efficient reasoning: it achieves a 54.4x speedup while maintaining performance comparable to explicit CoT.
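To build intuition for the exponential-decay claim above, the following is a minimal sketch, not the paper's actual derivation: it assumes a generic model in which the learning signal available for an order-r interaction scales like gamma**r for some decay rate gamma < 1 (the value of `gamma` here is a hypothetical placeholder, not a quantity from the paper). Under any such model, skipping intermediate steps, which forces the model to learn higher-order dependencies directly, leaves a vanishingly small signal.

```python
# Toy illustration of exponential signal decay with interaction order r.
# Assumption (not from the paper): signal(r) = gamma ** r, gamma < 1.

def signal_strength(r: int, gamma: float = 0.5) -> float:
    """Hypothetical learning signal for an order-r logical dependency."""
    return gamma ** r

# Doubling the order squares the attenuation: going from order 4 to
# order 16 shrinks the signal by a factor of gamma ** 12.
for r in (1, 2, 4, 8, 16):
    print(f"order {r:2d}: signal ~ {signal_strength(r):.2e}")
```

The takeaway is qualitative: whatever the true constants, any multiplicative per-step attenuation compounds, so compressing many reasoning steps into one latent hop faces an exponentially weaker training signal unless, as in ALiCoT, the latent states are explicitly aligned with intermediate reasoning.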