🤖 AI Summary
This work addresses key limitations in existing search-augmented reasoning methods, which often suffer from redundant evidence retrieval, context saturation, and training instability due to unconstrained search. To overcome these issues, the authors propose DeepControl, a novel framework that introduces—for the first time—a formal adaptive control mechanism grounded in information utility theory. This mechanism dynamically regulates the timing, termination, and granularity of retrieval during inference. Complemented by an annealing-based training strategy, DeepControl guides the model to internalize efficient information-seeking behaviors. In contrast to conventional reinforcement learning approaches that rely solely on outcome-based feedback, the proposed method achieves significant performance gains across seven benchmarks, outperforming strong baselines by average margins of 9.4% and 8.6% on Qwen2.5-7B and Qwen2.5-3B, respectively.
📝 Abstract
Search-augmented reasoning agents interleave multi-step reasoning with external information retrieval, but uncontrolled retrieval often leads to redundant evidence, context saturation, and unstable learning. Existing approaches rely on outcome-based reinforcement learning (RL), which provides limited guidance for regulating information acquisition. We propose DeepControl, a framework for adaptive information control based on a formal notion of information utility, which measures the marginal value of retrieved evidence under a given reasoning state. Building on this utility, we introduce retrieval continuation and granularity control mechanisms that selectively regulate when to continue or stop retrieval, and how much information to expand. An annealed control strategy enables the agent to internalize effective information acquisition behaviors during training. Extensive experiments across seven benchmarks demonstrate that our method consistently outperforms strong baselines. In particular, our approach achieves average performance improvements of 9.4% and 8.6% on Qwen2.5-7B and Qwen2.5-3B, respectively, over strong outcome-based RL baselines, and consistently outperforms both retrieval-free and retrieval-based reasoning methods without explicit information control. These results highlight the importance of adaptive information control for scaling search-augmented reasoning agents to complex, real-world information environments.
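To make the idea concrete, here is a minimal, hypothetical sketch of utility-gated retrieval: a controller keeps retrieving only while the marginal utility of new evidence exceeds an annealed threshold. All names (`information_utility`, `RetrievalController`, the token-overlap utility, the threshold decay) are illustrative stand-ins, not the paper's actual formulation.

```python
from dataclasses import dataclass, field

def information_utility(evidence: str, state: set) -> float:
    """Toy marginal utility: fraction of the evidence's tokens not already
    covered by the reasoning state (a stand-in for the paper's formal
    information-utility measure)."""
    tokens = set(evidence.lower().split())
    if not tokens:
        return 0.0
    return len(tokens - state) / len(tokens)

@dataclass
class RetrievalController:
    threshold: float = 0.5              # minimum utility to keep retrieving
    anneal: float = 0.9                 # per-step threshold decay (annealed control)
    state: set = field(default_factory=set)

    def run(self, retriever, query: str, max_steps: int = 5) -> list:
        """Retrieve until evidence becomes redundant (utility < threshold)."""
        kept = []
        for step in range(max_steps):
            evidence = retriever(query, step)
            if evidence is None:
                break
            if information_utility(evidence, self.state) < self.threshold:
                break                   # marginal value too low: stop retrieval
            kept.append(evidence)
            self.state |= set(evidence.lower().split())
            self.threshold *= self.anneal  # relax the gate as reasoning proceeds
        return kept

# Toy retriever returning progressively redundant passages.
docs = [
    "Paris is the capital of France",
    "France capital is Paris",          # mostly redundant with the first
    "The Eiffel Tower is in Paris",
]

def toy_retriever(query, step):
    return docs[step] if step < len(docs) else None

controller = RetrievalController(threshold=0.5)
kept = controller.run(toy_retriever, "capital of France")
```

In this toy run the controller accepts the first passage and halts at the second because its tokens are already covered by the reasoning state, mirroring the paper's goal of avoiding redundant evidence and context saturation.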