🤖 AI Summary
This study addresses the limited reliability of large language models (LLMs) as evaluators in difficult pairwise comparison tasks, particularly those involving long-form factuality, mathematical reasoning, and code correctness. The authors propose an agent-based evaluation framework with external verification capabilities: it gathers verifiable evidence through explicit web-search and code-execution tools and uses structured prompting to reduce the influence of the LLM's internal knowledge gaps, biases, and hallucinations. Experiments on several challenging benchmarks show improved annotator accuracy and consistency in many, though not all, cases, and also reveal that evaluation performance is highly sensitive to tool-invocation strategy and prompt design. The work argues for non-saturated, externally verifiable LLM evaluation and offers a practical path toward more trustworthy AI feedback systems.
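To make the framework concrete, here is a minimal sketch of what such a tool-grounded pairwise annotator could look like. It is not the authors' implementation: the function names (`web_search`, `run_code`, `call_llm`, `judge_pair`) and the plan-then-judge prompting flow are illustrative assumptions, and the tool backends are stubbed out.

```python
# Minimal sketch (not the paper's code): a pairwise annotator that may consult
# external tools before judging. `web_search`, `run_code`, and `call_llm` are
# placeholders for whatever search API, sandbox, and LLM client you actually use.
from dataclasses import dataclass


@dataclass
class ToolResult:
    tool: str
    query: str
    output: str


def web_search(query: str) -> ToolResult:
    # Placeholder: in practice, call a real search API and return snippets.
    return ToolResult("web_search", query, "stubbed search snippets")


def run_code(source: str) -> ToolResult:
    # Placeholder: in practice, execute in a sandbox and capture stdout/errors.
    return ToolResult("run_code", source, "stubbed execution output")


def call_llm(prompt: str) -> str:
    # Placeholder for the underlying LLM annotator.
    return "A"


def judge_pair(instruction: str, response_a: str, response_b: str) -> str:
    """Return 'A' or 'B', grounding the judgment in tool evidence where possible."""
    # 1) Structured prompt: ask the model what claims to verify and what to execute.
    plan = call_llm(
        f"Instruction:\n{instruction}\n\nList factual claims to verify via web "
        f"search and any code that should be executed to check correctness."
    )

    # 2) Gather external evidence (one call to each tool, for illustration).
    evidence = [web_search(plan), run_code(response_a)]
    evidence_text = "\n".join(f"[{e.tool}] {e.output}" for e in evidence)

    # 3) Final verdict conditioned on the collected evidence.
    verdict = call_llm(
        f"Instruction:\n{instruction}\n\nResponse A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\nExternal evidence:\n{evidence_text}\n\n"
        f"Answer with exactly 'A' or 'B'."
    )
    return "A" if verdict.strip().upper().startswith("A") else "B"


# Toy usage with the stubbed backends above:
print(judge_pair("What is 2 + 2?", "4", "5"))
```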
📝 Abstract
Pairwise preferences over model responses are widely collected to evaluate and provide feedback to large language models (LLMs). Given two alternative model responses to the same input, a human or AI annotator selects the "better" response. This approach can provide feedback for domains where other hard-coded metrics are difficult to obtain (e.g., chat response quality), thereby helping model evaluation or training. However, for some domains high-quality pairwise comparisons can be tricky to obtain, from both AI and human annotators. For example, for responses with many factual statements, annotators may disproportionately weigh writing quality rather than underlying facts. In this work, we explore augmenting standard AI annotator systems with additional tools to improve performance on three challenging response domains: long-form factual, math, and code tasks. We propose a tool-using agentic system to provide higher-quality feedback on these domains. Our system uses web search and code execution to ground itself in external validation, independent of the LLM's internal knowledge and biases. We provide extensive experimental results evaluating our method across the three targeted response domains as well as general annotation tasks, using RewardBench (incl. AlpacaEval and LLMBar), RewardMath, and three new datasets for domains whose pre-existing datasets are saturated. Our results indicate that external tools can indeed improve performance in many, but not all, cases. More generally, our experiments highlight the sensitivity of performance to simple parameters (e.g., the prompt) and the need for improved (non-saturated) annotator benchmarks. We share our code at https://github.com/apple/ml-agent-evaluator.
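As an illustration of the code-execution grounding mentioned in the abstract, the sketch below runs a candidate code response against simple input/output tests and prefers the response that actually passes. It assumes such tests are available (or synthesized by the agent), which is an assumption of this example rather than a claim about the released code at the linked repository; a real system would also need proper sandboxing.

```python
# Illustrative only: check a candidate Python program against (stdin, expected stdout)
# pairs, then use the outcome as external evidence for the pairwise verdict.
import subprocess
import sys
import tempfile


def passes_tests(candidate_source: str, tests: list[tuple[str, str]]) -> bool:
    """Run the candidate program on each (stdin, expected_stdout) pair."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_source)
        path = f.name
    for stdin_text, expected in tests:
        proc = subprocess.run(
            [sys.executable, path],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=10,
        )
        if proc.returncode != 0 or proc.stdout.strip() != expected.strip():
            return False
    return True


# Toy usage: prefer the response whose code actually satisfies the checks.
tests = [("3 4\n", "7")]
resp_a = "a, b = map(int, input().split()); print(a + b)"
resp_b = "a, b = map(int, input().split()); print(a - b)"
print("A" if passes_tests(resp_a, tests) else "B")  # -> "A"
```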