MAS-ProVe: Understanding the Process Verification of Multi-Agent Systems

📅 2026-02-03
🤖 AI Summary
This study addresses the high variance in reasoning trajectories of large language model (LLM)-based multi-agent systems (MAS) and the unclear efficacy of process-level verification. It presents the first systematic empirical investigation of process verification in MAS, evaluating three verification paradigms, two granularities, five verifier types, and four context management strategies across six MAS frameworks and multiple reasoning benchmarks. The results demonstrate that process-level verification does not consistently improve performance; LLM-as-a-Judge generally outperforms reward-model-based approaches; and trained, specialized judge models achieve the best results. The work further reveals the instability of verification effectiveness and a trade-off between context length and performance, concluding that process verification in MAS remains an open challenge.

📝 Abstract
Multi-Agent Systems (MAS) built on Large Language Models (LLMs) often exhibit high variance in their reasoning trajectories. Process verification, which evaluates intermediate steps in trajectories, has shown promise in general reasoning settings and has been suggested as a potential tool for guiding coordination of MAS; however, its actual effectiveness in MAS remains unclear. To fill this gap, we present MAS-ProVe, a systematic empirical study of process verification for multi-agent systems. Our study spans three verification paradigms (LLM-as-a-Judge, reward models, and process reward models), evaluated across two levels of verification granularity (agent-level and iteration-level). We further examine five representative verifiers and four context management strategies, and conduct experiments over six diverse MAS frameworks on multiple reasoning benchmarks. We find that process-level verification does not consistently improve performance and frequently exhibits high variance, highlighting the difficulty of reliably evaluating partial multi-agent trajectories. Among the methods studied, LLM-as-a-Judge generally outperforms reward-based approaches, with trained judges surpassing general-purpose LLMs. We further observe a small performance gap between LLMs acting as judges and as single agents, and identify a context-length-performance trade-off in verification. Overall, our results suggest that effective and robust process verification for MAS remains an open challenge, requiring further advances beyond current paradigms. Code is available at https://github.com/Wang-ML-Lab/MAS-ProVe.
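The agent-level, LLM-as-a-Judge setup the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: `AgentStep`, `verify_step`, `filter_trajectory`, and the `judge` callable (a stand-in for an LLM that scores a prompt) are all hypothetical names, and a real MAS framework would typically re-sample a rejected agent's step rather than simply drop it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentStep:
    agent: str    # which agent produced this intermediate step
    content: str  # the step's output text

def verify_step(
    trajectory: List[AgentStep],
    judge: Callable[[str], float],
    threshold: float = 0.5,
) -> bool:
    """Agent-level process verification: show the judge the partial
    trajectory and accept the latest step if its score clears the bar."""
    prompt = "\n".join(f"[{s.agent}] {s.content}" for s in trajectory)
    prompt += "\n\nRate the correctness of the final step from 0 to 1."
    return judge(prompt) >= threshold

def filter_trajectory(
    steps: List[AgentStep],
    judge: Callable[[str], float],
    threshold: float = 0.5,
) -> List[AgentStep]:
    """Verify each step as it arrives; keep only accepted steps.
    (Placeholder policy: rejected steps are dropped, not re-sampled.)"""
    kept: List[AgentStep] = []
    for step in steps:
        if verify_step(kept + [step], judge, threshold):
            kept.append(step)
    return kept
```

The prompt above is also where the abstract's context-length-performance trade-off surfaces: passing the full partial trajectory to the judge grows the context with every step, while truncating it risks hiding the errors the verifier is meant to catch.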
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Systems
Process Verification
Large Language Models
Reasoning Trajectories
Verification Effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process Verification
Multi-Agent Systems
LLM-as-a-Judge
Reasoning Trajectories
Empirical Evaluation