IDEAlign: Comparing Large Language Models to Human Experts in Open-ended Interpretive Annotations

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM evaluation lacks scalable, empirically validated metrics for quantifying conceptual alignment with human experts on open-ended interpretive annotation tasks. To address this, the paper proposes IDEAlign, an expert-aligned benchmarking framework designed specifically for such tasks. IDEAlign introduces a "pick-the-odd-one-out" triplet judgment paradigm for capturing expert similarity ratings, and evaluates topic models, text embeddings, and an LLM-as-a-judge against these expert benchmarks. Experiments show that conventional lexical and vector-space similarity metrics fail to capture the semantic dimensions experts prioritize, whereas LLMs prompted via IDEAlign align substantially better with expert judgments, a 9-30% absolute improvement over baselines on two real-world educational datasets. These findings support IDEAlign as an effective paradigm for assessing conceptual fidelity in interpretive annotation.

📝 Abstract
Large language models (LLMs) are increasingly applied to open-ended, interpretive annotation tasks, such as thematic analysis by researchers or generating feedback on student work by teachers. These tasks involve free-text annotations requiring expert-level judgments grounded in specific objectives (e.g., research questions or instructional goals). Evaluating whether LLM-generated annotations align with those generated by expert humans is challenging to do at scale, and currently, no validated, scalable measure of similarity in ideas exists. In this paper, we (i) introduce the scalable evaluation of interpretive annotation by LLMs as a critical and understudied task, (ii) propose IDEAlgin, an intuitive benchmarking paradigm for capturing expert similarity ratings via a "pick-the-odd-one-out" triplet judgment task, and (iii) evaluate various similarity metrics, including vector-based ones (topic models, embeddings) and LLM-as-a-judge via IDEAlgin, against these human benchmarks. Applying this approach to two real-world educational datasets (interpretive analysis and feedback generation), we find that vector-based metrics largely fail to capture the nuanced dimensions of similarity meaningful to experts. Prompting LLMs via IDEAlgin significantly improves alignment with expert judgments (9-30% increase) compared to traditional lexical and vector-based metrics. These results establish IDEAlgin as a promising paradigm for evaluating LLMs against open-ended expert annotations at scale, informing responsible deployment of LLMs in education and beyond.
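To make the evaluation concrete, here is a minimal sketch (not the paper's code) of how an embedding-based similarity metric could be scored against expert "pick-the-odd-one-out" triplet judgments: for each triplet, the metric's odd one out is the annotation excluded from the most similar pair, and agreement with the expert's choice gives the metric's alignment score. The sentence-transformers model name and the example triplet are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): scoring an embedding-based
# similarity metric against expert "pick-the-odd-one-out" triplet judgments.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

def odd_one_out(annotations: list[str]) -> int:
    """Return the index of the annotation least similar to the other two."""
    emb = model.encode(annotations, convert_to_tensor=True)
    # Pairwise cosine similarities among the three annotations
    sims = {(i, j): util.cos_sim(emb[i], emb[j]).item()
            for i, j in combinations(range(3), 2)}
    # The odd one out is the item excluded from the most similar pair
    (i, j), _ = max(sims.items(), key=lambda kv: kv[1])
    return ({0, 1, 2} - {i, j}).pop()

# Hypothetical expert triplets: (three free-text annotations, expert's odd-one-out index)
triplets = [
    (["Student confuses area and perimeter",
      "Mixes up area with perimeter in step 2",
      "Arithmetic slip when adding fractions"], 2),
]
agreement = sum(odd_one_out(anns) == gold for anns, gold in triplets) / len(triplets)
print(f"Metric-expert agreement: {agreement:.2%}")
```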
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM alignment with human expert annotations
Measuring similarity in open-ended interpretive tasks
Assessing scalable metrics for expert judgment comparison
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes the IDEAlign benchmarking paradigm, which captures expert similarity ratings via a pick-the-odd-one-out triplet task
Evaluates vector-based metrics (topic models, embeddings) and LLM-as-a-judge against human expert ratings
Shows that prompting LLMs via the triplet task significantly improves alignment with expert judgments (see the sketch below)
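Below is a minimal sketch, under stated assumptions, of how the LLM-as-a-judge variant might pose the same triplet task: the model sees three annotations and returns the number of the odd one out, which can then be compared to the expert benchmark. The prompt wording, the OpenAI client, and the model name are illustrative assumptions, not the paper's actual prompt.

```python
# Minimal sketch (assumptions): an LLM-as-a-judge call for the
# pick-the-odd-one-out triplet task, using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You will see three annotations of the same student work.
Considering the ideas they express (not their wording), pick the one that is
least similar to the other two. Answer with a single number: 1, 2, or 3.

1. {a}
2. {b}
3. {c}
"""

def llm_odd_one_out(a: str, b: str, c: str, model: str = "gpt-4o-mini") -> int:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(a=a, b=b, c=c)}],
        temperature=0,
    )
    # Convert the 1-based answer to a 0-based index; assumes a well-formed reply
    return int(resp.choices[0].message.content.strip()[0]) - 1
```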
Authors
Hyunji Nam, Stanford University
Lucia Langlois, Stanford University
James Malamut, Stanford University
Mei Tan, Stanford University
Dorottya Demszky, Assistant Professor, Stanford University
Research interests: natural language processing, education data science, teacher professional learning