🤖 AI Summary
Current LLM evaluation lacks scalable, empirically validated metrics for quantifying conceptual alignment with human experts in open-ended explanatory annotation tasks. To address this, the authors propose IDEAlgin, an expert-aligned benchmarking paradigm designed for such tasks. IDEAlgin captures expert similarity ratings through an intuitive "pick-the-odd-one-out" triplet judgment task, and these ratings serve as benchmarks against which candidate similarity metrics (topic models, text embeddings, and LLM-as-a-judge) are evaluated. Experimental results show that conventional lexical and vector-space similarity metrics fail to reflect the semantic dimensions experts prioritize. In contrast, LLMs prompted via IDEAlgin align significantly better with expert judgments, gaining 9-30% in consistency over baselines on two real-world educational datasets. These findings establish IDEAlgin as a promising paradigm for assessing conceptual fidelity in open-ended explanatory annotation at scale.
📝 Abstract
Large language models (LLMs) are increasingly applied to open-ended, interpretive annotation tasks, such as thematic analysis by researchers or generating feedback on student work by teachers. These tasks involve free-text annotations requiring expert-level judgments grounded in specific objectives (e.g., research questions or instructional goals). Evaluating whether LLM-generated annotations align with those generated by expert humans is challenging to do at scale, and currently, no validated, scalable measure of similarity in ideas exists. In this paper, we (i) introduce the scalable evaluation of interpretive annotation by LLMs as a critical and understudied task, (ii) propose IDEAlgin, an intuitive benchmarking paradigm for capturing expert similarity ratings via a "pick-the-odd-one-out" triplet judgment task, and (iii) evaluate various similarity metrics, including vector-based ones (topic models, embeddings) and LLM-as-a-judge via IDEAlgin, against these human benchmarks. Applying this approach to two real-world educational datasets (interpretive analysis and feedback generation), we find that vector-based metrics largely fail to capture the nuanced dimensions of similarity meaningful to experts. Prompting LLMs via IDEAlgin significantly improves alignment with expert judgments (9-30% increase) compared to traditional lexical and vector-based metrics. These results establish IDEAlgin as a promising paradigm for evaluating LLMs against open-ended expert annotations at scale, informing responsible deployment of LLMs in education and beyond.
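The "pick-the-odd-one-out" triplet task can be sketched in miniature: given three annotations, the outlier is the one least similar to the other two. The sketch below is illustrative only and is not from the paper; it uses a toy bag-of-words cosine similarity as the pairwise metric, whereas the paper benchmarks topic models, embeddings, and LLM-as-a-judge against expert triplet judgments. The function name `odd_one_out` is a hypothetical stand-in.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over bag-of-words term counts.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def odd_one_out(annotations: list[str]) -> int:
    # Hypothetical helper: given a triplet of free-text annotations,
    # return the index of the item least similar to the other two.
    assert len(annotations) == 3
    bags = [Counter(t.lower().split()) for t in annotations]
    totals = [
        sum(cosine(bags[i], bags[j]) for j in range(3) if j != i)
        for i in range(3)
    ]
    return min(range(3), key=lambda i: totals[i])

triplet = [
    "the student shows strong reasoning",
    "the student demonstrates strong reasoning",
    "please fix the formatting",
]
print(odd_one_out(triplet))  # -> 2
```

In the paper's setup, experts answer the same triplet question directly, and a candidate metric is scored by how often its odd-one-out choice agrees with theirs; swapping the toy metric here for embeddings or an LLM judge changes only the pairwise similarity function.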