The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates why large language models (LLMs) degrade on long-horizon tasks as step count increases. It identifies errors in execution, rather than an inability to reason, as the primary failure mechanism, and observes that even marginal improvements in single-step accuracy compound into exponential gains in achievable task length. To isolate execution capability, the authors explicitly provide the knowledge and plan needed to solve each long-horizon task and compare mainstream models against state-of-the-art "thinking" models. Experiments show that per-step accuracy degrades as steps accumulate, driven in part by a self-conditioning effect: models become more likely to err when their own prior mistakes appear in the context. Scaling model size alone does not remove this self-conditioning, although larger models do execute more turns; recent thinking models avoid self-conditioning entirely and can execute far longer tasks within a single turn, substantially extending reliable execution horizons. The authors present this as the first systematic study to characterize LLM execution degradation and to empirically demonstrate the value of sequential test-time compute for long-duration tasks.

📝 Abstract
Does continued scaling of large language models (LLMs) yield diminishing returns? Real-world value often stems from the length of task an agent can complete. We start this work by observing the simple but counterintuitive fact that marginal gains in single-step accuracy can compound into exponential improvements in the length of a task a model can successfully complete. Then, we argue that failures of LLMs when simple tasks are made longer arise from mistakes in execution, rather than an inability to reason. We propose isolating execution capability, by explicitly providing the knowledge and plan needed to solve a long-horizon task. We find that larger models can correctly execute significantly more turns even when small models have 100% single-turn accuracy. We observe that the per-step accuracy of models degrades as the number of steps increases. This is not just due to long-context limitations -- curiously, we observe a self-conditioning effect -- models become more likely to make mistakes when the context contains their errors from prior turns. Self-conditioning does not reduce by just scaling the model size. In contrast, recent thinking models do not self-condition, and can also execute much longer tasks in a single turn. We conclude by benchmarking frontier thinking models on the length of task they can execute in a single turn. Overall, by focusing on the ability to execute, we hope to reconcile debates on how LLMs can solve complex reasoning problems yet fail at simple tasks when made longer, and highlight the massive benefits of scaling model size and sequential test-time compute for long-horizon tasks.
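The abstract's compounding claim can be made concrete with a minimal sketch: if step successes are independent with per-step accuracy p, the task length a model completes with at least 50% probability is the largest n with p^n >= 0.5, i.e. ln(0.5)/ln(p). The numbers below are illustrative, not taken from the paper.

```python
import math

def horizon(p: float, target: float = 0.5) -> float:
    """Longest task length n such that p**n >= target,
    assuming independent steps with per-step accuracy p."""
    return math.log(target) / math.log(p)

for p in (0.99, 0.999):
    print(f"per-step accuracy {p}: ~{horizon(p):.0f} steps at 50% success")
```

Cutting the per-step error rate tenfold (0.01 to 0.001) extends the 50%-success horizon roughly tenfold, from about 69 steps to about 693: a marginal single-step gain, an outsized long-horizon gain.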
Problem

Research questions and friction points this paper is trying to address.

Investigating diminishing returns in LLM scaling for long tasks
Analyzing execution failures versus reasoning inaccuracies in LLMs
Measuring self-conditioning effects on error propagation in multi-step tasks
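The self-conditioning mechanism above can be illustrated with a toy Monte Carlo simulation (the error rates are hypothetical, chosen only to show the qualitative effect): once a mistake enters the context, the per-step error probability rises, so measured per-step accuracy declines with step number even though the task itself never gets harder.

```python
import random

def per_step_accuracy(n_steps, n_trials, base_err, cond_err, seed=0):
    """Fraction of trials whose t-th step is correct, when prior
    errors in the context raise the error rate (self-conditioning)."""
    rng = random.Random(seed)
    correct = [0] * n_steps
    for _ in range(n_trials):
        slipped = False  # has an error entered the context yet?
        for t in range(n_steps):
            err = cond_err if slipped else base_err
            if rng.random() < err:
                slipped = True
            else:
                correct[t] += 1
    return [c / n_trials for c in correct]

acc = per_step_accuracy(50, 10_000, base_err=0.02, cond_err=0.10)
print(f"step 1: {acc[0]:.3f}  step 50: {acc[-1]:.3f}")
```

With cond_err equal to base_err the curve stays flat; with cond_err above it, accuracy at step 50 is visibly lower than at step 1, matching the degradation pattern the paper attributes to self-conditioning rather than to long context alone.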
Innovation

Methods, ideas, or system contributions that make the work stand out.

Isolating execution capability by explicitly providing the knowledge and plan
Larger models execute more turns even when small models reach 100% single-turn accuracy
Identifying a self-conditioning effect: prior errors in context make new mistakes more likely