Dual-Density Inference for Efficient Language Model Reasoning

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address computational redundancy in large language models (LLMs) arising from uniform linguistic density across intermediate reasoning steps and final answers, this paper proposes a dual-density inference framework. It decouples reasoning into high-density, symbolic internal computation—enabling compressed intermediate reasoning—and low-density, natural-language output generation for human-readable responses. The core innovation lies in the first formal distinction between the model’s “computational function” (optimized for efficiency) and “communication function” (optimized for interpretability), guiding the design of a density-adaptive decoding strategy and a structured Chain-of-Thought reformulation. Experiments across multiple complex reasoning benchmarks demonstrate up to 62% reduction in token consumption, with particularly pronounced gains on multi-step tasks, while maintaining or improving answer accuracy.

📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities in complex reasoning tasks. However, current approaches employ uniform language density for both intermediate reasoning and final answers, leading to computational inefficiency. We observe that the reasoning process serves a computational function for the model itself, while the answer serves a communicative function for human understanding. This distinction enables the use of compressed, symbol-rich language for intermediate computations while maintaining human-readable final explanations. To address this inefficiency, we present Denser (Dual-density inference), a novel framework that optimizes information density separately for the reasoning and answering phases. Our framework implements this through three components: a query processing module that analyzes input problems, a high-density compressed reasoning mechanism for efficient intermediate computations, and an answer generation component that translates compressed reasoning into human-readable solutions. Experimental evaluation across multiple reasoning question answering benchmarks demonstrates that Denser reduces token consumption by up to 62% compared to standard Chain-of-Thought methods while preserving or improving accuracy. These efficiency gains are particularly significant for complex multi-step reasoning problems where traditional methods generate extensive explanations.
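The three components named in the abstract (query processing, compressed reasoning, answer generation) can be sketched as a simple prompt pipeline. This is a hypothetical illustration: the function names, prompt templates, and `llm` callable are all assumptions for the sketch, not the paper's actual implementation.

```python
# Hypothetical sketch of the three-stage Denser pipeline from the abstract.
# All names and prompt wordings here are illustrative assumptions.

REASONING_PROMPT = (
    "Solve step by step using compressed symbolic notation "
    "(variables, arithmetic, no full sentences):\n{question}"
)
ANSWER_PROMPT = (
    "Given the compressed reasoning trace below, write a clear, "
    "human-readable final answer.\nTrace:\n{trace}\nQuestion:\n{question}"
)

def process_query(question: str) -> str:
    """Stage 1: analyze/normalize the input problem (placeholder)."""
    return question.strip()

def compressed_reasoning(llm, question: str) -> str:
    """Stage 2: high-density symbolic intermediate computation."""
    return llm(REASONING_PROMPT.format(question=question))

def generate_answer(llm, question: str, trace: str) -> str:
    """Stage 3: translate the compressed trace into a readable answer."""
    return llm(ANSWER_PROMPT.format(trace=trace, question=question))

def denser_infer(llm, raw_question: str) -> str:
    """Run the full dual-density pipeline with any text-in/text-out model."""
    q = process_query(raw_question)
    trace = compressed_reasoning(llm, q)
    return generate_answer(llm, q, trace)
```

The key design point the sketch captures is the separation of phases: the reasoning prompt is free to elicit dense, symbol-rich output, because only the final stage's output is shown to the user.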
Problem

Research questions and friction points this paper is trying to address.

Optimizes information density for reasoning and answering phases
Reduces token consumption in language model reasoning tasks
Improves computational efficiency while maintaining human-readable explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-density inference separates reasoning and answering phases
High-density compressed reasoning reduces intermediate token consumption
Framework maintains human-readable answers while improving computational efficiency
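To make the density contrast behind these bullets concrete, here is a toy comparison of a natural-language reasoning step versus a compressed symbolic one. The example texts and token counts are invented for illustration (whitespace tokens, not a real tokenizer); the paper itself reports up to 62% reduction versus standard Chain-of-Thought.

```python
# Toy illustration of the token savings from compressed reasoning.
# Both strings and the resulting counts are invented for illustration.
natural_cot = ("First, we note that the train travels 60 miles in one hour, "
               "so in 2.5 hours it travels 60 times 2.5, which equals "
               "150 miles.").split()
compressed = "v=60mi/h; t=2.5h; d=v*t=150mi".split()

reduction = 1 - len(compressed) / len(natural_cot)
print(f"tokens: {len(natural_cot)} -> {len(compressed)} "
      f"({reduction:.0%} fewer)")  # prints: tokens: 25 -> 3 (88% fewer)
```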
Zhengyi Zhao, The Chinese University of Hong Kong (Natural Language Processing, Machine Learning, Information Extraction)
Shubo Zhang, University of International Relations
Yuxi Zhang, University of Illinois, Urbana-Champaign (Condensed Matter Physics)
Huimin Wang, Shenzhen University
Binyang Li, University of International Relations
Kam-Fai Wong, The Chinese University of Hong Kong