Decomposing Reasoning Efficiency in Large Language Models

📅 2026-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the prevailing focus on final accuracy in large language model (LLM) evaluation, which overlooks how inference tokens are spent or wasted during reasoning. The authors propose the first trace-optional framework for analyzing reasoning trajectories, decomposing token efficiency into completion rate under a fixed token budget, conditional correctness given completion, and verbosity. Verbosity is further factored into per-task verbalization overhead (tokens per work unit) and a coupling coefficient capturing how overhead scales with task workload. Deterministic trace-quality metrics (grounding, repetition, prompt copying), which require neither human annotation nor LLM-based judgment, distinguish degenerate looping from verbose-but-engaged reasoning. A systematic evaluation of 25 models on the CogniLoad benchmark shows that accuracy and efficiency rankings diverge (Spearman ρ = 0.63), that efficiency gaps are often driven by conditional correctness, and that per-task verbalization overhead varies by nearly 9× across models while correlating only weakly with model scale.

📝 Abstract
Large language models trained for reasoning trade off inference tokens against accuracy, yet standard evaluations report only final accuracy, obscuring where tokens are spent or wasted. We introduce a trace-optional framework that decomposes token efficiency into interpretable factors: completion under a fixed token budget (avoiding truncation), conditional correctness given completion, and verbosity (token usage). When benchmark metadata provides per-instance workload proxies, we further factor verbosity into two components: mean verbalization overhead (tokens per work unit) and a coupling coefficient capturing how overhead scales with task workload. When reasoning traces are available, we add deterministic trace-quality measures (grounding, repetition, prompt copying) to separate degenerate looping from verbose-but-engaged reasoning, avoiding human labeling and LLM judges. Evaluating 25 models on CogniLoad, we find that accuracy and token-efficiency rankings diverge (Spearman $\rho=0.63$), efficiency gaps are often driven by conditional correctness, and verbalization overhead varies by about 9 times (only weakly related to model scale). Our decomposition reveals distinct bottleneck profiles that suggest different efficiency interventions.
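The decomposition the abstract describes can be sketched in code. This is an illustrative interpretation only, not the paper's exact formulas: the field names (`tokens`, `correct`) and the n-gram repetition proxy are assumptions for the sketch.

```python
# Illustrative sketch of the trace-optional efficiency decomposition:
# completion rate under a token budget, conditional correctness given
# completion, and verbosity (mean tokens among completed runs).
# Field names and the repetition proxy are hypothetical, not from the paper.

def decompose(runs, budget):
    """runs: list of dicts with 'tokens' (int) and 'correct' (bool)."""
    completed = [r for r in runs if r["tokens"] <= budget]
    completion_rate = len(completed) / len(runs)
    cond_correct = (sum(r["correct"] for r in completed) / len(completed)
                    if completed else 0.0)
    verbosity = (sum(r["tokens"] for r in completed) / len(completed)
                 if completed else float("nan"))
    return completion_rate, cond_correct, verbosity

def repetition_rate(trace_tokens, n=3):
    """Fraction of repeated n-grams in a trace: one deterministic
    proxy for degenerate looping, needing no human or LLM judge."""
    grams = [tuple(trace_tokens[i:i + n])
             for i in range(len(trace_tokens) - n + 1)]
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)
```

For example, with three runs of 100, 300, and 150 tokens under a 200-token budget, only two complete, so completion rate is 2/3 and verbosity is averaged over those two; a trace that cycles between the same two tokens scores a high repetition rate.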
Problem

Research questions and friction points this paper is trying to address.

reasoning efficiency
token efficiency
large language models
inference tokens
model evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

token efficiency
reasoning decomposition
verbosity analysis
trace-quality metrics
conditional correctness