🤖 AI Summary
Large language models (LLMs) used as automatic evaluators ("LLM-as-a-Judge") suffer from judgment preference bias—systematically overrating responses generated by themselves—which undermines evaluation reliability. To address this, the paper proposes Genii, an unsupervised multi-agent collaborative optimization framework that simulates an interactive client-server polling mechanism, requiring no human-annotated judgment data. Genii integrates multiple LLM-based judgment models as client agents and optimizes each client using consensus signals aggregated during polling. Notably, the server agent can be instantiated with a weaker model without sacrificing client improvements. Experiments across benchmarks and diverse client LLMs show that Genii outperforms supervised models trained on annotated judgment data and effectively mitigates judgment preference bias.
📝 Abstract
Large Language Models (LLMs) used as automatic evaluators, commonly referred to as LLM-as-a-Judge, have attracted growing attention. This approach plays a vital role in aligning LLMs with human judgments by providing accurate and reliable assessments. However, LLM-based judgment models often exhibit judgment preference bias during evaluation, tending to favor responses generated by themselves, which undermines the reliability of their judgments. This paper introduces Group-Based Polling Optimization (Genii), an unsupervised multi-agent collaborative optimization framework that mitigates the inherent judgment preference bias of judgment models. Specifically, Genii integrates multiple LLM-based judgment models into a multi-agent system and simulates an interactive client-server polling mechanism to optimize each client agent without supervision. Our experiments demonstrate that Genii outperforms supervised models trained on annotated judgment data, while requiring no human-labeled annotations, and consistently improves performance across different client agents during polling, even when weaker models act as server agents. Further analysis reveals that Genii effectively mitigates the judgment preference bias of LLM-based judgment models. All code is available at https://github.com/NEUIR/Genii.
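To make the client-server polling idea concrete, here is a minimal, hypothetical sketch of one polling round: several client judges each vote on a response pair, the server aggregates the votes into a majority pseudo-label, and per-client agreement flags could then drive unsupervised optimization. All names (`poll_round`, `make_judge`) and the mock judges are illustrative assumptions, not Genii's actual API or training procedure.

```python
import random
from collections import Counter

def poll_round(clients, prompt, response_a, response_b):
    """One polling round: collect each client's preference ("A" or "B"),
    take the majority vote as a pseudo-label, and report which clients
    agreed with it (a possible unsupervised training signal)."""
    votes = [judge(prompt, response_a, response_b) for judge in clients]
    pseudo_label, _ = Counter(votes).most_common(1)[0]
    agreement = [vote == pseudo_label for vote in votes]
    return pseudo_label, agreement

def make_judge(bias=None, seed=0):
    """Mock judge standing in for an LLM-based judgment model. A biased
    judge always prefers one slot, mimicking self-preference bias."""
    rng = random.Random(seed)
    def judge(prompt, a, b):
        return bias if bias is not None else rng.choice(["A", "B"])
    return judge

# One self-preferring client plus two unbiased ones; with three voters,
# the majority pseudo-label always has at least two supporting votes.
clients = [make_judge(bias="A"), make_judge(seed=1), make_judge(seed=2)]
label, agree = poll_round(clients, "Which answer is better?", "ans1", "ans2")
```

In this toy setup, a client that disagrees with the group consensus (its `agree` flag is `False`) would receive a corrective signal, which is one plausible way a polling round could optimize clients without human labels.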