🤖 AI Summary
This work addresses the challenge of balancing efficiency and reasoning flexibility when verifying reasoning models in high-stakes scenarios. The authors propose InterWhen, a test-time verification framework that uses meta-prompts to guide models to embed verifiable structure into their original reasoning trajectories. The approach supports both self-verification and external verification, enabling intermediate reasoning steps to be dynamically parsed, validated, and, when necessary, intervened upon, without disrupting the model's autonomous reasoning process. Experiments show that, under self-verification, InterWhen achieves state-of-the-art early stopping for reasoning with no loss in accuracy. Under external verification, it improves accuracy by 10 percentage points over existing test-time scaling methods, attains 100% soundness, and is four times more computationally efficient.
📝 Abstract
We present a test-time verification framework, interwhen, that ensures that the output of a reasoning model is valid with respect to a given set of verifiers. Verified reasoning is an important goal in high-stakes scenarios such as deploying agents in the physical world or in domains such as law and finance. However, current techniques either rely on the generate-test paradigm, which verifies only after the final answer is produced, or verify partial output through a step-extraction paradigm, where task execution is externally broken down into structured steps. The former is inefficient, while the latter artificially restricts a model's problem-solving strategies. Instead, we propose to verify a model's reasoning trace as-is, taking full advantage of the model's reasoning capabilities while verifying and steering its output only when needed. The key idea is meta-prompting: identifying the verifiable properties that any partial solution should satisfy, and then prompting the model to follow a custom format in its trace so that partial outputs can be easily parsed and checked. We consider both self-verification and external verification and find that interwhen provides a useful abstraction for providing feedback and steering reasoning models in each case. Using self-verification, interwhen obtains state-of-the-art results on early stopping of reasoning models, without any loss in accuracy. Using external verifiers, interwhen obtains a 10 p.p. improvement in accuracy over test-time scaling methods, while ensuring 100% soundness and being 4x more efficient. The code for interwhen is available at https://github.com/microsoft/interwhen.
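To make the parse-and-check idea concrete, here is a minimal sketch of an external verification loop over a tagged reasoning trace. The `<claim>…</claim>` tag format, the toy arithmetic verifier, and the steering message are all illustrative assumptions, not interwhen's actual prompt format or implementation.

```python
import re

# Hypothetical tag format that a meta-prompt could ask the model to emit
# around verifiable partial results in its reasoning trace.
STEP_TAG = re.compile(r"<claim>(.*?)</claim>", re.DOTALL)

def verify_claim(claim: str) -> bool:
    """Toy external verifier: checks arithmetic claims like '2 + 3 = 5'."""
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)\s*", claim)
    return bool(m) and int(m.group(1)) + int(m.group(2)) == int(m.group(3))

def check_trace(trace: str):
    """Parse tagged partial outputs from a trace and verify each in order.

    Returns (ok, feedback): ok is False at the first failing claim, and
    feedback is a steering message that could be fed back to the model.
    """
    for m in STEP_TAG.finditer(trace):
        claim = m.group(1).strip()
        if not verify_claim(claim):
            return False, f"Step '{claim}' failed verification; please revise."
    return True, None

# The trace itself stays free-form; only the tagged spans are checked.
trace = "First, <claim>2 + 3 = 5</claim>. Then <claim>5 + 4 = 10</claim>."
ok, feedback = check_trace(trace)
```

The point of the abstraction is that the model's free-form reasoning is left untouched; the verifier only inspects the tagged spans and intervenes (here, by producing a feedback string) when a check fails.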