🤖 AI Summary
To address computational redundancy in large language models (LLMs) arising from uniform linguistic density across intermediate reasoning steps and final answers, this paper proposes a dual-density inference framework. It decouples reasoning into high-density, symbolic internal computation—enabling compressed intermediate reasoning—and low-density, natural-language output generation for human-readable responses. The core innovation lies in the first formal distinction between the model’s “computational function” (optimized for efficiency) and “communication function” (optimized for interpretability), guiding the design of a density-adaptive decoding strategy and a structured Chain-of-Thought reformulation. Experiments across multiple complex reasoning benchmarks demonstrate up to 62% reduction in token consumption, with particularly pronounced gains on multi-step tasks, while maintaining or improving answer accuracy.
📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities in complex reasoning tasks. However, current approaches employ uniform language density for both intermediate reasoning and final answers, leading to computational inefficiency. We observe that the reasoning process serves a computational function for the model itself, while answering serves a communicative function for human understanding. This distinction enables the use of compressed, symbol-rich language for intermediate computations while maintaining human-readable final explanations. To address this inefficiency, we present Denser (**D**ual-d**ens**ity inf**er**ence), a novel framework that optimizes information density separately for the reasoning and answering phases. Our framework implements this through three components: a query processing module that analyzes input problems, a high-density compressed reasoning mechanism for efficient intermediate computations, and an answer generation component that translates compressed reasoning into human-readable solutions. Experimental evaluation across multiple reasoning question-answering benchmarks demonstrates that Denser reduces token consumption by up to 62% compared to standard Chain-of-Thought methods while preserving or improving accuracy. These efficiency gains are particularly significant for complex multi-step reasoning problems, where traditional methods generate extensive explanations.
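The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: all function names (`process_query`, `compressed_reasoning`, `generate_answer`) and the trivial whitespace-stripping "compression" are assumptions chosen to show the dual-density idea, whereas the actual framework operates on LLM token streams.

```python
# Hypothetical sketch of the Denser pipeline: high-density symbolic
# intermediate reasoning, low-density natural-language final answer.
# All names and the toy compression scheme are illustrative assumptions.

def process_query(problem: str) -> dict:
    """Query processing module: analyze the input problem.
    Toy version: split a comma-separated problem into facts."""
    return {"facts": [p.strip() for p in problem.split(",")]}

def compressed_reasoning(state: dict) -> list[str]:
    """High-density reasoning: emit terse, symbol-rich intermediate
    steps instead of a full natural-language sentence per step."""
    # Compact symbolic form: drop spaces, tag each step with an index.
    return [f"s{i}:{fact.replace(' ', '')}"
            for i, fact in enumerate(state["facts"])]

def generate_answer(steps: list[str]) -> str:
    """Answer generation: translate the compressed trace back into a
    human-readable explanation."""
    return "Derived from " + ", ".join(steps) + "."

def denser(problem: str) -> tuple[list[str], str]:
    state = process_query(problem)
    steps = compressed_reasoning(state)
    return steps, generate_answer(steps)

steps, answer = denser("x = 2, y = 3, x + y = 5")
print(steps)   # compressed symbolic trace
print(answer)  # human-readable answer
```

The token savings the paper reports come from the middle stage: the symbolic trace is much shorter than a sentence-per-step Chain-of-Thought, while the final answer remains readable.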