Can Unconfident LLM Annotations Be Used for Confident Conclusions?

📅 2024-08-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the problem of unreliable downstream statistical inference in computational social science (CSS) when the quality of annotations from large language models (LLMs) is uncertain. The authors propose Confidence-Driven Inference (CDI), a framework that jointly leverages LLM-generated annotations, LLM-reported confidence scores, and strategic sampling of human annotations. CDI comes with formal guarantees: statistical estimation accuracy and confidence interval coverage are no worse than under purely human annotation, while human effort is directed toward the samples where the LLM is least confident. Evaluated on three canonical CSS tasks (politeness detection, stance classification, and bias identification), CDI reduces human annotation effort by over 25% while maintaining accurate estimates and valid confidence intervals.

📝 Abstract
Large language models (LLMs) have shown high agreement with human raters across a variety of tasks, demonstrating potential to ease the challenges of human data collection. In computational social science (CSS), researchers are increasingly leveraging LLM annotations to complement slow and expensive human annotations. Still, guidelines for collecting and using LLM annotations, without compromising the validity of downstream conclusions, remain limited. We introduce Confidence-Driven Inference: a method that combines LLM annotations and LLM confidence indicators to strategically select which human annotations should be collected, with the goal of producing accurate statistical estimates and provably valid confidence intervals while reducing the number of human annotations needed. Our approach comes with safeguards against LLM annotations of poor quality, guaranteeing that the conclusions will be both valid and no less accurate than if we only relied on human annotations. We demonstrate the effectiveness of Confidence-Driven Inference over baselines in statistical estimation tasks across three CSS settings (text politeness, stance, and bias), reducing the needed number of human annotations by over 25% in each. Although we use CSS settings for demonstration, Confidence-Driven Inference can be used to estimate most standard quantities across a broad range of NLP problems.
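The core idea (use LLM confidence to decide which items get human labels, then debias the LLM-based estimate with the human-labeled subsample) can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the function name, the confidence-weighted sampling rule, and the inverse-probability correction are simplifying assumptions chosen for clarity.

```python
import numpy as np

def confidence_driven_estimate(llm_labels, llm_conf, human_budget,
                               get_human_label, rng=None):
    """Hedged sketch of confidence-driven inference (illustrative only).

    Sends the items where the LLM is least confident to human review,
    then returns the LLM mean plus an inverse-probability-weighted
    correction computed on the human-labeled subsample.
    """
    rng = np.random.default_rng(rng)
    llm_labels = np.asarray(llm_labels, dtype=float)
    n = len(llm_labels)
    # Lower LLM confidence -> higher probability of human annotation.
    weights = 1.0 - np.asarray(llm_conf, dtype=float)
    probs = weights / weights.sum()
    chosen = rng.choice(n, size=human_budget, replace=False, p=probs)
    # Debiased estimate: if LLM labels match human labels, the
    # correction vanishes and we recover the plain LLM mean.
    correction = 0.0
    for i in chosen:
        pi = min(1.0, human_budget * probs[i])  # approx. inclusion probability
        correction += (get_human_label(i) - llm_labels[i]) / pi
    return llm_labels.mean() + correction / n
```

When the human labels agree with the LLM labels, the correction term is zero and the estimate equals the LLM mean; when they disagree, the correction pulls the estimate back toward the human-labeled truth, which is the safeguard property the paper emphasizes.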
Problem

Research questions and friction points this paper is trying to address.

Ensure validity of LLM annotations
Reduce human annotation workload
Improve statistical estimation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLM annotations with confidence indicators
Reduces human annotations by over 25%
Ensures valid and accurate statistical estimates