🤖 AI Summary
Current LLM-based automatic evaluation predominantly relies on simplistic aggregation methods such as majority voting, which can fail even when individual agents provide correct answers. To address this, we propose a multi-agent debate framework that enhances evaluation accuracy and robustness through collaborative reasoning and iterative refinement. Our key contributions are threefold: (1) a time-adaptive Beta-Binomial mixture model that characterizes the evolution of agent confidence over debate rounds; (2) a dynamic termination mechanism leveraging the Kolmogorov–Smirnov test and distributional similarity metrics to assess consensus stability; and (3) an explicit consensus dynamics model that formalizes the convergence process of collective judgment. Extensive experiments across multiple benchmarks and state-of-the-art LLMs demonstrate significant accuracy improvements over majority voting, while maintaining computational efficiency.
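To make contribution (1) concrete, here is a minimal sketch of a mixture of Beta-Binomial distributions, one plausible form of the confidence model described above. The component weights and shape parameters (which the paper would re-fit at each debate round) are illustrative assumptions, not the authors' actual parameterization.

```python
# Sketch of a K-component Beta-Binomial mixture over the number of
# correct judgments k out of n agents. In a time-adaptive variant,
# the weights and (alpha, beta) shapes would be re-estimated per round.
from math import comb, lgamma, exp

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    """P(K = k) for a Beta-Binomial(n, alpha, beta) distribution."""
    return comb(n, k) * exp(log_beta(k + alpha, n - k + beta)
                            - log_beta(alpha, beta))

def mixture_pmf(k, n, weights, params):
    """Weighted mixture of Beta-Binomial components.

    weights: mixture weights summing to 1 (illustrative values).
    params:  list of (alpha, beta) shape pairs, one per component.
    """
    return sum(w * beta_binomial_pmf(k, n, a, b)
               for w, (a, b) in zip(weights, params))
```

With `alpha = beta = 1` each component reduces to a uniform distribution over `k`, a convenient sanity check for the implementation.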
📝 Abstract
With advancements in reasoning capabilities, Large Language Models (LLMs) are increasingly employed for automated judgment tasks. While LLMs-as-Judges offer promise in automating evaluations, current approaches often rely on simplistic aggregation methods (e.g., majority voting), which can fail even when individual agents provide correct answers. To address this, we propose a multi-agent debate judge framework in which agents collaboratively reason and iteratively refine their responses. We formalize the debate process mathematically, analyzing agent interactions and proving that debate amplifies correctness compared to static ensembles. To enhance efficiency, we introduce a stability detection mechanism that models the judges' collective correct-rate dynamics using a time-varying mixture of Beta-Binomial distributions and employs an adaptive stopping criterion based on distributional similarity (the Kolmogorov-Smirnov statistic). Experiments across multiple benchmarks and models demonstrate that our framework improves judgment accuracy over majority voting while maintaining computational efficiency.
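The adaptive stopping criterion can be sketched as follows: compare the empirical distribution of agent confidences (or correct rates) between consecutive debate rounds, and stop once the two-sample Kolmogorov-Smirnov distance falls below a threshold. The function names and the threshold value are illustrative assumptions, not the paper's API.

```python
# Sketch of KS-based stability detection between debate rounds.
# ks_statistic computes the two-sample KS distance (max gap between
# empirical CDFs); should_stop is a hypothetical stopping rule with
# an assumed threshold, not the paper's tuned value.
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic for samples a and b."""
    a, b = sorted(a), sorted(b)
    # The empirical CDFs can only differ at observed sample points.
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in a + b)

def should_stop(prev_confidences, curr_confidences, threshold=0.15):
    """Terminate the debate once the confidence distribution has
    stabilized between rounds (small KS distance)."""
    return ks_statistic(prev_confidences, curr_confidences) < threshold
```

In a full system this check would run after each round, trading a small per-round cost for early termination once consensus is stable.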