Evaluating LLMs Without Oracle Feedback: Agentic Annotation Evaluation Through Unsupervised Consistency Signals

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
In dynamic unsupervised settings where ground-truth labels are unavailable, evaluating the quality of LLM-generated annotations remains challenging. Method: the paper proposes a proxy-collaborative unsupervised evaluation paradigm in which a lightweight student model collaborates with an LLM to construct unsupervised signals from cross-model output consistency. Its core contribution is the Consistent and Inconsistent (CAI) Ratio, a quantifiable metric that characterizes annotation quality and guides model selection without ground-truth supervision. The method combines user-preference-driven majority voting, student-teacher collaboration, and the CAI Ratio computation framework. Results: evaluated on ten cross-domain NLP datasets and four mainstream LLMs, the CAI Ratio shows a strong positive correlation (average Spearman’s ρ = 0.89) with model accuracy, significantly outperforming existing unsupervised evaluation baselines.
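The cross-model consistency signal can be sketched minimally. The exact formula is not given in this summary, so the sketch below assumes a hypothetical definition: an item is "consistent" when the student model's prediction matches the majority-voted LLM output, and the ratio is (consistent − inconsistent) / total. The paper's actual computation may differ.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among several sampled LLM outputs."""
    return Counter(labels).most_common(1)[0][0]

def cai_ratio(llm_samples, student_preds):
    """Toy CAI ratio: (consistent - inconsistent) / total.
    Hypothetical definition for illustration only; the paper's exact
    formula and the role of user preferences may differ."""
    consistent = inconsistent = 0
    for samples, student_label in zip(llm_samples, student_preds):
        if majority_vote(samples) == student_label:
            consistent += 1
        else:
            inconsistent += 1
    total = consistent + inconsistent
    return (consistent - inconsistent) / total if total else 0.0

# Example: 3 items, each with 3 sampled LLM annotations, plus student predictions
llm = [["pos", "pos", "neg"], ["neg", "neg", "neg"], ["pos", "neg", "neg"]]
student = ["pos", "neg", "pos"]
print(cai_ratio(llm, student))  # 2 consistent, 1 inconsistent -> 1/3
```

Under this toy definition, a higher CAI Ratio indicates stronger agreement between the noisy teacher and the student, which the paper reports correlates with LLM accuracy and can therefore rank candidate models without labels.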

📝 Abstract
Large Language Models (LLMs), when paired with prompt-based tasks, have significantly reduced data annotation costs and reliance on human annotators. However, evaluating the quality of their annotations remains challenging in dynamic, unsupervised environments where oracle feedback is scarce and conventional methods fail. To address this challenge, we propose a novel agentic annotation paradigm, where a student model collaborates with a noisy teacher (the LLM) to assess and refine annotation quality without relying on oracle feedback. The student model, acting as an unsupervised feedback mechanism, employs a user preference-based majority voting strategy to evaluate the consistency of the LLM outputs. To systematically measure the reliability of LLM-generated annotations, we introduce the Consistent and Inconsistent (CAI) Ratio, a novel unsupervised evaluation metric. The CAI Ratio not only quantifies the annotation quality of the noisy teacher under limited user preferences but also plays a critical role in model selection, enabling the identification of robust LLMs in dynamic, unsupervised environments. Applied to ten open-domain NLP datasets across four LLMs, the CAI Ratio demonstrates a strong positive correlation with LLM accuracy, establishing it as an essential tool for unsupervised evaluation and model selection in real-world settings.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM annotation quality without oracle feedback
Assessing annotation consistency through unsupervised signals
Enabling model selection in dynamic unsupervised environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic annotation paradigm with student-teacher collaboration
Unsupervised feedback via user preference majority voting
CAI Ratio metric for consistency-based quality evaluation
Cheng Chen
Australian Artificial Intelligence Institute (AAII), University of Technology Sydney, Australia
Haiyan Yin
Unknown affiliation · Reinforcement Learning · Machine Learning
Ivor W. Tsang
College of Computing and Data Science, Nanyang Technological University, Singapore