Contextualized Evaluations: Taking the Guesswork Out of Language Model Evaluations

📅 2024-11-11
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
📄 PDF
🤖 AI Summary
To address the ambiguity and subjectivity in evaluating language model responses to underspecified queries (e.g., subjective or open-ended questions), this paper proposes contextualized evaluations, a protocol that embeds synthesized context, such as user identities, query intents, and utility criteria, into the assessment process. Methodologically, it combines context synthesis, multi-dimensional human evaluation design, cross-dimensional quality analysis, and bias-sensitivity quantification. The study systematically uncovers mainstream models' implicit preference for WEIRD (Western, Educated, Industrialized, Rich, Democratic) contexts in their default responses and reveals pronounced asymmetry in how well models follow different contexts. Experiments show that the protocol can flip relative model win rates, mitigates superficial stylistic biases, and yields fine-grained behavioral insights, thereby substantially improving evaluation objectivity, interpretability, and diagnostic utility.

📝 Abstract
Language model users often issue queries that lack specification, where the context under which a query was issued -- such as the user's identity, the query's intent, and the criteria for a response to be useful -- is not explicit. For instance, a good response to a subjective query like "What book should I read next?" would depend on the user's preferences, and a good response to an open-ended query like "How do antibiotics work against bacteria?" would depend on the user's expertise. This makes evaluation of responses to such queries an ill-posed task, as evaluators may make arbitrary judgments about the response quality. To remedy this, we present contextualized evaluations, a protocol that synthetically constructs context surrounding an underspecified query and provides it during evaluation. We find that the presence of context can 1) alter conclusions drawn from evaluation, even flipping win rates between model pairs, 2) nudge evaluators to make fewer judgments based on surface-level criteria, like style, and 3) provide new insights about model behavior across diverse contexts. Specifically, our procedure uncovers an implicit bias towards WEIRD contexts in models' "default" responses and we find that models are not equally sensitive to following different contexts, even when they are provided in prompts.
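As a rough illustration of the protocol, the Python sketch below shows how synthesized context, represented as follow-up question/answer pairs capturing user identity, intent, and utility criteria, might be attached to an underspecified query in a pairwise judge prompt. The data structures and prompt wording are illustrative assumptions, not the paper's exact templates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContextQA:
    """One follow-up question/answer pair that makes an implicit assumption explicit."""
    question: str
    answer: str

def render_contextualized_prompt(query: str, context: List[ContextQA],
                                 response_a: str, response_b: str) -> str:
    """Build a pairwise judge prompt that includes synthesized context.

    The template is a hypothetical stand-in for the paper's evaluation
    instructions, not the exact wording used in the study.
    """
    context_block = "\n".join(f"Q: {c.question}\nA: {c.answer}" for c in context)
    return (
        f"Query: {query}\n\n"
        f"Context about the user and their needs:\n{context_block}\n\n"
        f"Response A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\n"
        "Given the context above, which response better serves this user? "
        "Answer 'A' or 'B' and briefly justify your choice."
    )

if __name__ == "__main__":
    query = "What book should I read next?"
    context = [
        ContextQA("What genres do you usually enjoy?", "Literary fiction and memoirs."),
        ContextQA("Why are you looking for a new book?", "A long train commute next month."),
        ContextQA("What would make a recommendation useful?", "One specific title with a short reason."),
    ]
    print(render_contextualized_prompt(query, context, "<response A>", "<response B>"))
```

The same query evaluated without the context block corresponds to the paper's context-free baseline condition.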
Problem

Research questions and friction points this paper is trying to address.

Underspecified queries lack explicit context (user identity, intent, utility criteria) for evaluation
Without that context, evaluators make arbitrary judgments about response quality
Models' default responses show an implicit bias towards WEIRD contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic construction of context for underspecified queries, provided during evaluation
Providing context alters evaluation conclusions and can flip model win rates (see the sketch after this list)
Reveals uneven sensitivity of models to different contexts and bias in their default responses
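To make the "context can flip win rates" point concrete, here is a minimal sketch of the aggregation step: given pairwise judge verdicts collected with and without context, compute Model A's win rate in each condition. The verdict lists are toy values invented purely to show the computation; they are not results from the paper, and the helper name `win_rate` is hypothetical.

```python
from collections import Counter
from typing import Iterable

def win_rate(verdicts: Iterable[str], model: str = "A") -> float:
    """Fraction of pairwise judgments won by `model` ('A' or 'B'); ties count as half."""
    counts = Counter(verdicts)
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no verdicts")
    return (counts[model] + 0.5 * counts["tie"]) / total

# Toy verdicts, for illustration only (not data from the paper):
context_free = ["A", "A", "B", "A", "tie", "B", "A", "B"]
contextualized = ["B", "B", "A", "B", "B", "tie", "B", "A"]

print(f"Model A win rate without context: {win_rate(context_free):.2f}")
print(f"Model A win rate with context:    {win_rate(contextualized):.2f}")
```

Comparing the two numbers for the same model pair is what reveals whether adding context changes the evaluation's conclusion.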