🤖 AI Summary
This work addresses the inefficiency, error propagation, and trajectory fragility of existing large language model (LLM) code agents on real-world software engineering tasks, problems that stem from the absence of fine-grained feedback on intermediate decisions. The study is the first to integrate a process reward model (PRM) into repository-scale code agents. Leveraging SWE-Bench trajectories, the authors construct an action-level reward dataset to train a lightweight PRM that evaluates the utility of intermediate actions. During inference, the PRM guides the agent via action scoring and selection, prioritizing high-reward steps. This approach enables step-level supervision without full reinforcement learning, significantly improving decision consistency and trajectory stability. Evaluation on SWE-Bench Verified demonstrates that intermediate rewards effectively guide task execution, while also revealing challenges in aligning such rewards with final task success.
📝 Abstract
Automating real-world software engineering tasks remains challenging for large language model (LLM)-based agents, which must reason over long horizons in large, evolving codebases and make consistent decisions across interdependent actions. Existing approaches typically rely on static prompting strategies or handcrafted heuristics to select actions such as code editing, file navigation, and test execution, but they lack fine-grained feedback on intermediate decisions. This leads to inefficient exploration, error propagation, and brittle solution trajectories. To address this limitation, we propose SWE-Shepherd, a framework that introduces Process Reward Models (PRMs) to provide dense, step-level supervision for repository-level code agents. Using trajectories from SWE-Bench, we construct an action-level reward dataset and train a lightweight reward model on a base LLM to estimate the usefulness of intermediate actions. During inference, the PRM evaluates candidate actions and guides the agent toward higher-reward decisions without requiring full reinforcement learning. Experiments on SWE-Bench Verified demonstrate improved interaction efficiency and action quality, while also highlighting challenges in aligning intermediate rewards with final task success.
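The inference-time loop described above can be sketched minimally: a PRM scores each candidate action given the trajectory so far, and the agent greedily takes the highest-scoring candidate. The function names and the toy scoring heuristic below are illustrative assumptions, not the paper's actual model or API; the real PRM is a trained LLM-based scorer.

```python
# Hypothetical sketch of PRM-guided action selection. All names here are
# illustrative assumptions; the paper's PRM is a trained reward model, not
# the toy heuristic used below.

def prm_score(trajectory: list[str], action: str) -> float:
    # Stand-in for the trained PRM: a toy heuristic that rewards candidate
    # actions referencing tokens already present in the trajectory context.
    context = " ".join(trajectory)
    return sum(1.0 for token in action.split() if token in context)

def select_action(trajectory: list[str], candidates: list[str]) -> str:
    # Greedy step-level selection: take the candidate with the highest
    # estimated intermediate reward, avoiding full RL over trajectories.
    return max(candidates, key=lambda a: prm_score(trajectory, a))

trajectory = ["open src/utils.py", "search def parse_config"]
candidates = [
    "edit src/utils.py",     # touches a file already in context
    "run tests/test_io.py",  # unrelated to current context
]
print(select_action(trajectory, candidates))  # → edit src/utils.py
```

In the full framework this greedy choice would be applied at every agent step, so the PRM shapes the whole trajectory rather than only the final patch.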