🤖 AI Summary
Large language models (LLMs) incur high computational overhead and latency when employing chain-of-thought (CoT) reasoning, while existing CoT compression methods rely on manual prompt engineering or externally constructed compressed datasets, often discarding critical reasoning information. To address this, we propose Upfront CoT (UCoT), a novel framework that introduces a lightweight “compressor” model to automatically learn compact, task-relevant reasoning representations *before* answer generation; these upfront thought embeddings are then consumed by a larger “executor” model for the final inference. UCoT enables end-to-end joint training without human-designed prompts or external data curation, integrating thought embedding, a collaborative compressor/executor workflow, and reward-driven optimization. On GSM8K, UCoT cuts the token consumption of Qwen2.5-7B-Instruct by 50% while improving accuracy by 3.08% over the state-of-the-art method, demonstrating simultaneous gains in inference efficiency and performance.
📝 Abstract
Recent developments have enabled advanced reasoning in Large Language Models (LLMs) via long Chain-of-Thought (CoT), but long CoT incurs high computational cost and significant latency owing to the autoregressive nature of generative LLMs. CoT compression aims to improve reasoning efficiency by reducing output length. Previous works achieve efficiency either through laborious discrete prompt design or by constructing external compressed CoT datasets that sacrifice key reasoning details. In this work, we propose Upfront CoT (UCoT): an efficient reasoning framework with upfront thought embeddings that automates CoT compression. UCoT is a cooperative workflow involving a small model (the compressor) and a large model (the executor). The first stage of UCoT trains the compressor to generate upfront thought embeddings rich in reasoning information for the executor, avoiding the drawbacks of manually designed prompts. The second stage optimizes the executor, via a reward mechanism, to use the upfront thought embeddings to derive the correct answer with short reasoning. Extensive experiments show that UCoT preserves the executor's strong reasoning ability while significantly reducing CoT length. Notably, when UCoT is applied to the Qwen2.5-7B-Instruct model, token usage on the GSM8K dataset is reduced by 50%, while performance is 3.08% higher than that of the state-of-the-art (SOTA) method. The code and dataset are provided in the supplementary material.
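The two-stage workflow described above can be sketched schematically. The following is a minimal, illustrative Python sketch only: the class names, the stand-in encoder, the embedding size, and the reward rule are all assumptions made for exposition, not the paper's actual models or training code. It shows the shape of the pipeline, where a small compressor emits a few fixed-size thought embeddings up front, a large executor conditions on them to answer with a short chain, and a reward balances correctness against reasoning length.

```python
# Hypothetical sketch of the UCoT compressor/executor workflow.
# All names, dimensions, and the reward rule are illustrative
# assumptions, not the paper's implementation.
from typing import List

EMBED_DIM = 4  # assumed size of each upfront thought embedding

class Compressor:
    """Small model (stage 1): maps a question to a short sequence of
    upfront thought embeddings instead of a long textual CoT."""
    def compress(self, question: str, num_thoughts: int = 2) -> List[List[float]]:
        # Stand-in encoder: fold character codes into fixed-size vectors.
        embeddings = []
        for t in range(num_thoughts):
            vec = [0.0] * EMBED_DIM
            for i, ch in enumerate(question):
                vec[(i + t) % EMBED_DIM] += ord(ch) / 1000.0
            embeddings.append(vec)
        return embeddings

class Executor:
    """Large model (stage 2): conditions on the thought embeddings and
    emits a short reasoning chain plus a final answer."""
    def answer(self, question: str, thoughts: List[List[float]]) -> str:
        # A real executor would attend over `thoughts`; here we only
        # check that they are present and well-formed.
        assert thoughts and all(len(v) == EMBED_DIM for v in thoughts)
        return "short-CoT answer"

def reward(prediction: str, gold: str, cot_length: int, max_len: int = 64) -> float:
    """Illustrative reward: correctness bonus minus a length penalty,
    mirroring the goal of correct answers with short reasoning."""
    correctness = 1.0 if prediction == gold else 0.0
    return correctness - 0.01 * min(cot_length, max_len)

compressor, executor = Compressor(), Executor()
q = "What is 2 + 3?"
thoughts = compressor.compress(q)   # compact upfront representation
pred = executor.answer(q, thoughts) # short-reasoning inference
```

In this toy setup the reward is maximized by answers that are both correct and produced with few reasoning tokens, which is the optimization pressure the second training stage applies to the executor.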