🤖 AI Summary
Structured writing tasks carry implicit evaluation criteria that are rarely stated in the prompt, so assessment that relies only on explicit requirements suffers from incomplete coverage and low accuracy. Method: This paper proposes the first automatic framework for discovering implicit evaluation criteria grounded in authoritative external evidence, combining expert web-resource mining, multi-source guideline parsing, prompt-driven LLM reasoning and clustering, and evidence alignment with criterion distillation. Contribution/Results: The framework systematically identifies high-precision, actionable, long-tail, and human-validated evaluation dimensions: 92% of the generated criteria are implicit (not explicitly specified in the initial prompt) and 87% achieve high lexical precision; model compliance with the criteria rises from 31% to 76%; and human-centered evaluation shows a 34% improvement in criterion coverage over purely LLM-based approaches, extending the capability boundary of large language models in professional writing assessment.
📝 Abstract
Evaluation of language model outputs on structured writing tasks is typically conducted against a set of desirable criteria presented to human evaluators or large language models (LLMs). For instance, on a prompt like "Help me draft an academic talk on coffee intake vs research productivity", a model response may be evaluated for criteria like accuracy and coherence. However, high-quality responses should do more than satisfy basic task requirements. An effective response to this query should include quintessential features of an academic talk, such as a compelling opening, clear research questions, and a takeaway. To help identify these implicit criteria, we introduce EvalAgent, a novel framework designed to automatically uncover nuanced, task-specific criteria. EvalAgent first mines expert-authored online guidance. It then uses this evidence to propose diverse, long-tail evaluation criteria that are grounded in reliable external sources. Our experiments demonstrate that the grounded criteria produced by EvalAgent are often implicit (not directly stated in the user's prompt), yet specific (exhibiting a high degree of lexical precision). Further, EvalAgent criteria are often not satisfied by initial responses, but they are actionable, such that responses can be refined to satisfy them. Finally, we show that combining LLM-generated and EvalAgent criteria uncovers more human-valued criteria than using LLMs alone.
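The abstract outlines a two-stage pipeline: mine expert-authored online guidance, then distill grounded evaluation criteria from that evidence. The sketch below illustrates that general shape only; the paper's actual retrieval stack, prompts, and clustering method are not specified here, and `search_expert_guidance` and `llm` are hypothetical stubs invented for illustration, not EvalAgent's implementation.

```python
# Minimal, hypothetical sketch of an EvalAgent-style criteria-discovery
# pipeline. The stub functions stand in for a web-search API and an LLM
# client; their names and behavior are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Criterion:
    text: str      # actionable criterion, e.g. "opens with a compelling hook"
    evidence: str  # the expert guidance snippet the criterion is grounded in


def search_expert_guidance(task_prompt: str) -> list[str]:
    """Hypothetical stand-in for mining expert-authored web guidance
    (e.g. 'how to give an academic talk' articles)."""
    return ["An effective academic talk opens with a compelling hook, "
            "states a clear research question, and ends with a takeaway."]


def llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns canned output here."""
    return ("- Opens with a compelling hook\n"
            "- States a clear research question\n"
            "- Ends with a concrete takeaway")


def propose_criteria(task_prompt: str) -> list[Criterion]:
    criteria: list[Criterion] = []
    for doc in search_expert_guidance(task_prompt):
        # Distill task-specific criteria from each piece of external evidence.
        bullets = llm(
            f"Task: {task_prompt}\nExpert guidance: {doc}\n"
            "List evaluation criteria implied by this guidance, one per line."
        )
        for line in bullets.splitlines():
            line = line.lstrip("- ").strip()
            if line:
                criteria.append(Criterion(text=line, evidence=doc))
    # A real system would cluster near-duplicate criteria across sources and
    # keep the long-tail, high-precision ones; simple deduplication stands
    # in for that step here.
    seen: set[str] = set()
    return [c for c in criteria if not (c.text in seen or seen.add(c.text))]


if __name__ == "__main__":
    prompt = ("Help me draft an academic talk on coffee intake "
              "vs research productivity")
    for c in propose_criteria(prompt):
        print(c.text, "<- grounded in:", c.evidence[:40], "...")
```

Because each criterion carries a pointer back to its source evidence, the resulting list can be handed to a human rater or an LLM judge as a checklist, which is how the abstract's "grounded" and "actionable" properties would be exercised in practice.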