PRInTS: Reward Modeling for Long-Horizon Information Seeking

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing process reward models (PRMs) struggle to assess tool-call quality, output comprehension, and context inflation in long-horizon trajectories, limiting AI agents’ performance on multi-step information retrieval tasks. To address this, we propose PRInTS, a generative process reward model that introduces the first multi-dimensional, dense scoring framework for tool interactions and reasoning steps. PRInTS incorporates a trajectory summarization mechanism to compress extended contextual histories, enabling sustained, fine-grained evaluation. Our method integrates generative reward modeling, decomposition of quality signals across reasoning steps, and best-of-n sampling for efficient inference-time optimization—deployable even on open-weight models. Experiments demonstrate that PRInTS significantly outperforms prior PRMs on FRAMES, GAIA, and WebWalkerQA benchmarks. Notably, it achieves state-of-the-art or competitive performance using substantially smaller backbone models, validating its effectiveness and generalizability for complex agent-centric tasks.

📝 Abstract
Information-seeking is a core capability for AI agents, requiring them to gather and reason over tool-generated information across long trajectories. However, such multi-step information-seeking tasks remain challenging for agents backed by language models. While process reward models (PRMs) can guide agents by ranking candidate steps at test-time, existing PRMs, designed for short reasoning with binary judgment, cannot capture richer dimensions of information-seeking steps, such as tool interactions and reasoning over tool outputs, nor handle the rapidly growing context in long-horizon tasks. To address these limitations, we introduce PRInTS, a generative PRM trained with dual capabilities: (1) dense scoring based on the PRM's reasoning across multiple step quality dimensions (e.g., interpretation of tool outputs, tool call informativeness) and (2) trajectory summarization that compresses the growing context while preserving essential information for step evaluation. Extensive evaluations across FRAMES, GAIA (levels 1-3), and WebWalkerQA (easy-hard) benchmarks on multiple models, along with ablations, reveal that best-of-n sampling with PRInTS enhances information-seeking abilities of open-source models as well as specialized agents, matching or surpassing the performance of frontier models with a much smaller backbone agent and outperforming other strong reward modeling baselines.
Problem

Research questions and friction points this paper is trying to address.

Reward models struggle with long-horizon information-seeking tasks
Existing PRMs cannot capture rich dimensions of tool interactions
Growing context in long trajectories challenges step evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative reward model with dual capabilities
Dense scoring across multiple quality dimensions
Trajectory summarization for context compression
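The core test-time mechanism above (scoring candidate steps against a compressed trajectory summary and keeping the best of n samples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `prm_score` stands in for PRInTS with a toy word-overlap heuristic, whereas the real model is a trained generative PRM that reasons over multiple quality dimensions before emitting a score.

```python
# Hypothetical stand-in for PRInTS: a generative PRM would produce a
# rationale plus a dense score per candidate step; here the scorer is
# stubbed with a simple word-overlap heuristic for illustration.
def prm_score(summary: str, candidate_step: str) -> float:
    # A real PRM conditions on a compressed trajectory summary rather
    # than the full history, keeping context bounded on long horizons.
    return float(len(set(candidate_step.split()) & set(summary.split())))

def best_of_n(summary: str, candidates: list[str]) -> str:
    # Rank the n sampled candidate steps by the PRM's dense score and
    # keep the highest-scoring one (best-of-n selection at test time).
    return max(candidates, key=lambda step: prm_score(summary, step))

summary = "searching for the 2019 population of Reykjavik"
candidates = [
    "call web_search for Reykjavik population 2019",
    "call calculator with 2 + 2",
    "give up and answer from memory",
]
print(best_of_n(summary, candidates))
# → call web_search for Reykjavik population 2019
```

The design point this sketch captures is that the agent proposes n candidate next steps, and the PRM, not the agent, decides which one enters the trajectory, so a small backbone agent can benefit from the reward model's judgment at inference time.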