Are Chatbots Reliable Text Annotators? Sometimes

📅 2023-11-09
📈 Citations: 6
Influential: 0
📄 PDF
🤖 AI Summary
Evaluating the reliability and suitability of large language models (LLMs) for foundational NLP tasks—particularly binary annotation of social media text—remains challenging due to inconsistent benchmarks, opaque evaluation protocols, and insufficient attention to reproducibility, data privacy, and cost. Method: We conduct a systematic, standardized comparison of multiple open-source LLMs and ChatGPT across zero-shot, few-shot, and custom-prompting strategies, benchmarking against a fine-tuned supervised DistilBERT classifier under rigorously controlled, GDPR-compliant, and cost-transparent conditions. Contribution/Results: Performance varies significantly across models and tasks; the DistilBERT classifier generally outperforms the LLMs in both robustness and accuracy. While ChatGPT occasionally achieves higher scores, its black-box nature, privacy risks, and high cost make it poorly suited to open scientific research. We propose the "Annotation-as-Experiment" framework and advocate principled, transparent use of open annotation tools to foster scientifically grounded, trustworthy LLM adoption in NLP.
📝 Abstract
Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regard to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. Thus, it is important to evaluate the performance of OS LLMs relative to ChatGPT and to standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of the performance of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, with results compared to supervised classification models. Using a new dataset of tweets from US news media, and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and OS models across the tasks, and that the supervised classifier using DistilBERT generally outperforms both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluate open-source LLMs vs ChatGPT
Compare OS LLMs with supervised models
Assess reliability of ChatGPT for annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates open-source LLMs performance
Compares zero-shot and few-shot learning
Uses DistilBERT for supervised classification
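The zero- vs few-shot distinction above amounts to whether labeled examples are included in the prompt sent to the LLM. A minimal sketch of how such binary-annotation prompts might be assembled (the task wording, example tweets, and labels here are illustrative, not the paper's actual prompts):

```python
def build_annotation_prompt(text, task, examples=None):
    """Assemble a binary text-annotation prompt for an LLM.

    Zero-shot: no examples are given, only the task description.
    Few-shot: labeled (text, label) pairs are prepended before the item.
    """
    lines = [f"Task: {task}", "Answer with exactly one word: Yes or No.", ""]
    for ex_text, ex_label in (examples or []):
        lines += [f"Tweet: {ex_text}", f"Answer: {ex_label}", ""]
    lines += [f"Tweet: {text}", "Answer:"]
    return "\n".join(lines)

# Zero-shot: the model sees only the task description and the item.
zero_shot = build_annotation_prompt(
    "Senate passes the new budget bill.",
    "Does this tweet mention politics?")

# Few-shot: labeled examples precede the item to annotate.
few_shot = build_annotation_prompt(
    "Senate passes the new budget bill.",
    "Does this tweet mention politics?",
    examples=[("Local team wins the cup.", "No"),
              ("Governor signs climate law.", "Yes")])
```

The returned strings would be passed to whichever model is under evaluation; the custom-prompt condition in the paper corresponds to varying the task wording itself.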
R. Kristensen-McLachlan
Center for Humanities Computing, Department of Linguistics, Cognitive Science, and Semiotics, Aarhus University
Miceal Canavan
Department of Political Science, Aarhus University
Márton Kardos
Aarhus University
Mia Jacobsen
Center for Humanities Computing, Aarhus University
cognitive science · computational linguistics · digital humanities · machine learning
L. Aarøe
Aarhus Institute of Advanced Studies, Aarhus University