JudgeAgent: Dynamically Evaluate LLMs with Agent-as-Interviewer

📅 2025-09-02
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing LLM evaluation methods suffer from limited interactivity, uncontrolled question difficulty, and poor verifiability of results, which hinders precise characterization of model capability boundaries. To address these limitations, we propose JudgeAgent, a dynamic, agent-based, interview-style evaluation framework that enables fine-grained, interpretable LLM assessment through knowledge-driven interactive questioning, target-adaptive difficulty adjustment, and a closed-loop feedback mechanism. Its core idea is to formalize evaluation as a bilateral "interviewer–candidate" interaction that combines an agent system, synthetic data generation, and interactive question extension on top of benchmark-based grading. Experiments demonstrate that JudgeAgent significantly outperforms static evaluation methods, yielding a more realistic and robust identification of an LLM's knowledge gaps and capability limits.
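
A minimal sketch may help make the interviewer-style loop concrete. Everything below is an illustration under stated assumptions, not the paper's implementation: grade_answer, synthesize_question, and interview are hypothetical stand-ins for the interviewer agent's calls.

```python
# Minimal sketch of the interviewer-style loop described above. All names
# here (grade_answer, synthesize_question, interview) are illustrative
# stand-ins, not the paper's implementation.
import random

def grade_answer(question: str, answer: str) -> bool:
    """Stand-in judge; in JudgeAgent the interviewer agent grades the reply."""
    return random.random() > 0.5  # placeholder for an LLM-based verdict

def synthesize_question(topic: str, difficulty: int) -> str:
    """Stand-in for knowledge-driven question synthesis at a target difficulty."""
    return f"[difficulty {difficulty}] follow-up on: {topic}"

def interview(candidate, seed_question: str, max_turns: int = 5):
    """Probe a candidate model, raising difficulty after each pass and
    lowering it after each fail, until a capability floor is reached."""
    difficulty, question, transcript = 1, seed_question, []
    for _ in range(max_turns):
        answer = candidate(question)              # query the target model
        correct = grade_answer(question, answer)  # interviewer judges the reply
        transcript.append((question, answer, correct, difficulty))
        difficulty += 1 if correct else -1        # target-adaptive adjustment
        if difficulty < 1:
            break  # the model's floor on this knowledge point
        question = synthesize_question(question, difficulty)
    return transcript

# Usage: interview(lambda q: "an answer to " + q, "What is self-attention?")
```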

📝 Abstract
Evaluating the capabilities of large language models (LLMs) is an essential step to ensure the successful application of LLMs across various domains. The current evaluation of LLMs is based on a paradigm that involves querying them with predefined question sets and assessing their outputs. This paradigm offers controllable processes and simplicity, but faces challenges such as limited interaction with targets, insufficient difficulty control, and difficulties in verifying the validity of evaluation results, making it hard to precisely determine the knowledge and capability boundaries of target models. To address these challenges, we propose JudgeAgent, a knowledge-target adaptive dynamic evaluation framework based on a new interviewer-style evaluation paradigm. JudgeAgent employs a comprehensive evaluation approach consisting of benchmark grading, interactive extension, and evaluation feedback. It utilizes knowledge-driven data synthesis and target-adaptive difficulty adjustment methods to conduct extended testing, providing accurate and effective evaluation results. We also introduce a novel insight into validating evaluation methods, demonstrating the effectiveness of JudgeAgent and its dynamic evaluation paradigm through extensive experiments.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM capabilities with a dynamic agent-as-interviewer approach
Addressing the limitations of static, predefined-question evaluation methods
Developing target-adaptive difficulty adjustment to precisely probe knowledge boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-as-Interviewer dynamic evaluation framework
Knowledge-driven data synthesis with target-adaptive difficulty adjustment
Benchmark grading combined with interactive extension and evaluation feedback (sketched below)
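
Read together, these bullets outline a three-stage flow: benchmark grading, interactive extension, and evaluation feedback. The sketch below is a hypothetical illustration of that flow; the stage names follow the abstract, while every function body and helper (benchmark_grading, interactive_extension, evaluation_feedback, judge_agent) is an illustrative stub, not the paper's code.

```python
# Hypothetical sketch of the three-stage flow named in the abstract:
# benchmark grading -> interactive extension -> evaluation feedback.
# Every function body below is an illustrative stub, not the paper's code.

def benchmark_grading(candidate, benchmark):
    """Stage 1: run the candidate over a static benchmark to seed the interview."""
    return [(q, candidate(q)) for q in benchmark]

def interactive_extension(candidate, graded):
    """Stage 2: extend testing around the candidate's apparent boundary.
    JudgeAgent would synthesize difficulty-adjusted questions here; this
    stub simply asks one follow-up per seeded item."""
    return [(q, candidate(f"follow-up on: {q}")) for q, _ in graded]

def evaluation_feedback(graded, extended):
    """Stage 3: aggregate both rounds into a capability report."""
    return {"benchmark_items": len(graded), "extended_items": len(extended)}

def judge_agent(candidate, benchmark):
    graded = benchmark_grading(candidate, benchmark)
    extended = interactive_extension(candidate, graded)
    return evaluation_feedback(graded, extended)

# Usage: judge_agent(lambda q: "answer: " + q, ["Q1", "Q2"])
```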