🤖 AI Summary
This work challenges the prevailing “frozen-world” assumption in evaluating large reasoning models (LRMs), exposing critical robustness deficiencies in realistic, dynamically evolving environments. Focusing on long-horizon reasoning tasks, such as assisted programming, that involve prolonged inference and continuous environmental change, the authors propose two evaluation dimensions: *interruption testing*, which measures the quality of partial outputs under a bounded computational budget, and *dynamic context testing*, which assesses how well a model adapts to external state changes that occur mid-inference. On mathematics and programming benchmarks that require long-form reasoning, the authors identify previously unreported failure modes, including *reasoning leakage*, *panic*, and *self-doubt*, that arise specifically in dynamic settings. Experiments show that state-of-the-art LRMs suffer up to 60% performance degradation under dynamic conditions, indicating that static evaluation substantially overestimates real-world robustness. The study establishes a foundational evaluation framework and provides empirical evidence essential for the trustworthy deployment of LRMs.
📝 Abstract
Large Reasoning Models (LRMs) excel at complex reasoning but are traditionally evaluated in static, "frozen world" settings: model responses are assumed to be instantaneous, and the context of a request is presumed immutable for the duration of the response. While this assumption generally holds for short-term tasks, it breaks down in modern reasoning tasks such as assistive programming, where models may take hours to think through problems and the code may change dramatically between the time the model starts thinking and its final output. In this work, we challenge the frozen world assumption and evaluate LRM robustness under two realistic dynamic scenarios: interruptions, which test the quality of the model's partial outputs under a limited budget, and dynamic context, which tests model adaptation to in-flight changes. Across mathematics and programming benchmarks that require long-form reasoning, static evaluations consistently overestimate robustness: even state-of-the-art LRMs, which achieve high accuracy in static settings, can fail unpredictably when interrupted or exposed to changing context, with performance dropping by up to 60% when updates are introduced late in the reasoning process. Our analysis further reveals several novel failure modes, including reasoning leakage, where interrupted models fold their reasoning into the final answer; panic, where models under time pressure abandon reasoning entirely and return incorrect answers; and self-doubt, where performance degrades when models must incorporate updated information.
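To make the two dynamic scenarios concrete, below is a minimal sketch of how interruption testing and dynamic context testing could be framed against a generic step-by-step reasoning model. The `Model` class and its `step`/`finalize` methods are hypothetical placeholders, not the paper's released code or any specific vendor API; the sketch only illustrates the evaluation protocol (bounded reasoning budget, mid-inference context update) under those assumptions.

```python
# Illustrative sketch only: `Model`, `step`, and `finalize` are hypothetical
# stand-ins for any chat-style LRM that reasons incrementally.
from dataclasses import dataclass


@dataclass
class StepResult:
    reasoning: str       # reasoning text produced in this step
    answer: str | None   # final answer, if the model has committed to one


class Model:
    """Placeholder interface for an LRM that reasons step by step."""

    def step(self, prompt: str, history: str) -> StepResult:
        """Produce one increment of reasoning (and possibly a final answer)."""
        raise NotImplementedError

    def finalize(self, prompt: str, history: str) -> str:
        """Force a best-effort answer from whatever reasoning exists so far."""
        raise NotImplementedError


def interruption_eval(model: Model, prompt: str, budget_steps: int) -> str:
    """Interruption testing: cap the reasoning budget, then grade whatever
    answer the model can produce from its partial reasoning."""
    history = ""
    for _ in range(budget_steps):
        result = model.step(prompt, history)
        history += result.reasoning
        if result.answer is not None:        # finished within budget
            return result.answer
    return model.finalize(prompt, history)   # interrupted: force an answer


def dynamic_context_eval(model: Model, prompt: str, update: str,
                         update_step: int, max_steps: int) -> str:
    """Dynamic context testing: inject an updated problem statement (e.g. a
    code edit) after `update_step` reasoning steps, then check whether the
    final answer reflects the new context rather than the stale one."""
    history = ""
    for step in range(max_steps):
        if step == update_step:
            history += f"\n[CONTEXT UPDATE] {update}\n"
        result = model.step(prompt, history)
        history += result.reasoning
        if result.answer is not None:
            return result.answer
    return model.finalize(prompt, history)
```

In this framing, sweeping `budget_steps` probes failure modes such as panic and reasoning leakage, while varying `update_step` (early vs. late in the reasoning process) probes adaptation to in-flight changes and self-doubt.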