HREF: Human Response-Guided Evaluation of Instruction Following in Language Models

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the misalignment with human judgments caused by reliance on powerful but potentially biased LLM judges in instruction-following evaluation, this paper proposes a human-response-guided evaluation paradigm. Methodologically, the authors (1) construct HREF, a contamination-free benchmark of 4,258 human-annotated samples spanning 11 task categories; (2) design a composite evaluation setup that incorporates human-written responses as an orthogonal contextual signal for the LLM judge and selects the most reliable evaluation method for each task category; and (3) host a live leaderboard backed by a private evaluation set. Experiments show up to a 3.2% improvement in agreement with human judges across diverse tasks, improving evaluation reliability and human alignment.

📝 Abstract
Evaluating the capability of Large Language Models (LLMs) in following instructions has heavily relied on a powerful LLM as the judge, introducing unresolved biases that cause its judgments to deviate from those of human judges. In this work, we reevaluate various choices for automatic evaluation on a wide range of instruction-following tasks. We experiment with methods that leverage human-written responses and observe that they enhance the reliability of automatic evaluations across a wide range of tasks, resulting in up to a 3.2% improvement in agreement with human judges. We also discovered that human-written responses offer an orthogonal perspective to model-generated responses in following instructions and should be used as an additional context when comparing model responses. Based on these observations, we develop a new evaluation benchmark, Human Response-Guided Evaluation of Instruction Following (HREF), comprising 4,258 samples across 11 task categories with a composite evaluation setup that selects the most reliable method for each category. In addition to providing reliable evaluation, HREF emphasizes individual task performance and is free from contamination. Finally, we study the impact of key design choices in HREF, including the size of the evaluation set, the judge model, the baseline model, and the prompt template. We host a live leaderboard that evaluates LLMs on the private evaluation set of HREF.
Problem

Research questions and friction points this paper is trying to address.

Mitigating the biases that LLM-as-judge evaluation introduces into instruction-following assessment
Improving the reliability of automatic evaluation by leveraging human-written responses
Building a contamination-free benchmark that reports performance on individual task categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages human-written responses as additional context for the LLM judge
Uses a composite setup that selects the most reliable evaluation method per task category
Shows human-written responses offer a perspective orthogonal to model-generated ones
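The core ideas above can be illustrated with a minimal sketch: a pairwise judge prompt that optionally injects the human-written response as extra context, plus a per-category lookup that mimics the composite setup. All names, prompt wording, and the category-to-method mapping here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of HREF-style human-response-guided judging.
# The category->method mapping and prompt text are assumptions for illustration.

# Composite setup: pick an evaluation method per task category
# (the paper selects the most reliable method for each of its 11 categories).
CATEGORY_METHOD = {
    "brainstorming": "judge_with_human_response",  # human response as judge context
    "closed_qa": "compare_to_human_response",      # compare directly to reference
}

def build_judge_prompt(instruction, response_a, response_b, human_response=None):
    """Assemble a pairwise-comparison prompt for an LLM judge, optionally
    injecting the human-written response as additional context."""
    parts = [f"Instruction:\n{instruction}"]
    if human_response is not None:
        parts.append(f"Human-written reference response:\n{human_response}")
    parts.append(f"Response A:\n{response_a}")
    parts.append(f"Response B:\n{response_b}")
    parts.append("Which response better follows the instruction? Answer 'A' or 'B'.")
    return "\n\n".join(parts)
```

In this sketch, the human-written response is not treated as the single gold answer but as an extra signal shown to the judge alongside the two model responses, matching the paper's observation that human responses provide an orthogonal perspective.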