Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the reliability and validity of GPT-4 as a "reader-annotator" for a theory-grounded, 21-dimensional emotion appraisal assessment with Likert-scale ratings, benchmarked against human annotators. Method: a single-prompt paradigm jointly predicts discrete emotion categories and continuous appraisal scores, augmented by a majority-voting ensemble over five completions; the effect of event description length on performance is also analyzed. Contribution/Results: GPT-4 achieves human-level inter-rater reliability (ICC > 0.8) on raw annotations, and majority voting improves reliability by a further 12%. Longer event descriptions significantly improve accuracy, and joint prediction yields an F1-score of 0.79 for discrete emotion classification. This work provides the first empirical evidence that large language models can match, and on some dimensions surpass, human annotators in multidimensional affective assessment, and it establishes a reproducible, efficient paradigm for AI-assisted psychological annotation.
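As a rough illustration of the reliability metric reported above, here is a minimal sketch of computing an intraclass correlation coefficient (ICC) with the pingouin library. The dataframe layout, column names, and values are assumptions for illustration, not the paper's data.

```python
# Sketch: inter-rater reliability via ICC for one appraisal dimension.
# Long-format data: one row per (event, rater) pair. Hypothetical values.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "event":  [1, 1, 1, 2, 2, 2, 3, 3, 3],   # annotated event id
    "rater":  ["h1", "h2", "gpt4"] * 3,      # two humans + GPT-4
    "rating": [4, 5, 4, 2, 2, 3, 5, 5, 5],   # Likert score for one dimension
})

icc = pg.intraclass_corr(data=ratings, targets="event",
                         raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])  # inspect, e.g., ICC2k for average-rater agreement
```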

📝 Abstract
Appraisal theories suggest that emotions arise from subjective evaluations of events, referred to as appraisals. The taxonomy of appraisals is quite diverse, and they are usually given ratings on a Likert scale to be annotated in an experiencer-annotator or reader-annotator paradigm. This paper studies GPT-4 as a reader-annotator of 21 specific appraisal ratings in different prompt settings, aiming to evaluate and improve its performance compared to human annotators. We found that GPT-4 is an effective reader-annotator that performs close to or even slightly better than human annotators, and its results can be significantly improved by using a majority voting of five completions. GPT-4 also effectively predicts appraisal ratings and emotion labels using a single prompt, but adding instruction complexity results in poorer performance. We also found that longer event descriptions lead to more accurate annotations for both model and human annotator ratings. This work contributes to the growing usage of LLMs in psychology and the strategies for improving GPT-4 performance in annotating appraisals.
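The abstract's "majority voting of five completions" can be sketched as a per-dimension vote over repeated GPT-4 outputs. The paper's exact aggregation rule is not given here, so the per-dimension mode below is an illustrative assumption.

```python
# Sketch: aggregate five completions, each a vector of Likert ratings
# (one per appraisal dimension), into a single rating vector.
from statistics import mode

def majority_vote(completions: list[list[int]]) -> list[int]:
    """Return the most frequent rating for each appraisal dimension."""
    return [mode(dim_ratings) for dim_ratings in zip(*completions)]

# Five hypothetical completions; 3 of the 21 dimensions shown.
completions = [
    [4, 2, 5], [4, 3, 5], [4, 2, 5], [3, 2, 4], [4, 2, 5],
]
print(majority_vote(completions))  # -> [4, 2, 5]
```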
Problem

Research questions and friction points this paper is trying to address.

Evaluating GPT-4's accuracy in emotion appraisal annotation
Comparing GPT-4 performance to human annotators
Improving annotation reliability via majority voting strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4 as reader-annotator for appraisal ratings
Majority voting improves annotation accuracy
A single prompt jointly predicts appraisal ratings and emotion labels (see the sketch below)
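The following is a hypothetical single-prompt template in the spirit of the paper's joint-prediction setup: one prompt elicits a discrete emotion label and all 21 appraisal ratings at once. The wording, rating scale, and dimension names are illustrative assumptions, not the authors' exact prompt.

```python
# Sketch: one prompt requesting both an emotion label and appraisal ratings.
APPRAISAL_DIMENSIONS = ["pleasantness", "goal relevance", "self control"]  # 3 of 21 shown

PROMPT_TEMPLATE = (
    "Read the event description below and respond in JSON.\n"
    "1. \"emotion\": the most fitting discrete emotion label.\n"
    "2. \"appraisals\": a rating from 1 (not at all) to 7 (extremely) "
    "for each of these appraisal dimensions: {dims}.\n\n"
    "Event: {event}"
)

prompt = PROMPT_TEMPLATE.format(dims=", ".join(APPRAISAL_DIMENSIONS),
                                event="I finally passed my driving exam.")
print(prompt)
```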
Deniss Ruder
Institute of Computer Science, University of Tartu
A. Uusberg
Institute of Psychology, University of Tartu
Kairit Sirts
University of Tartu
Natural Language Processing · Computational Linguistics · Computational Psychology · #unitartucs