How Annotation Trains Annotators: Competence Development in Social Influence Recognition

📅 2026-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the neglect of annotators’ dynamic competence development in subjective annotation tasks by framing the annotation process itself as a skill-building mechanism, using social influence identification as a case study. Tracking the performance of 25 expert and non-expert annotators across 1,021 dialogue segments, the research integrates pre- and post-annotation comparisons, semi-structured interviews, self-assessment questionnaires, and large language model (LLM) training and evaluation based on the annotated data. Results reveal that active annotation significantly enhances annotators’ self-perceived competence and confidence, with experts exhibiting more pronounced gains. Crucially, the performance trajectory of LLMs trained on their annotations effectively mirrors this improvement, thereby validating the positive impact of the annotation process on downstream model efficacy.
📝 Abstract
Human data annotation, especially when involving experts, is often treated as an objective reference. However, many annotation tasks are inherently subjective, and annotators' judgments may evolve over time. This study investigates changes in the quality of annotators' work from a competence perspective during a process of social influence recognition. The study involved 25 annotators from five different groups, including both experts and non-experts, who annotated a dataset of 1,021 dialogues with 20 social influence techniques, along with intentions, reactions, and consequences. An initial subset of 150 texts was annotated twice - before and after the main annotation process - to enable comparison. To measure competence shifts, we combined qualitative and quantitative analyses of the annotated data, semi-structured interviews with annotators, self-assessment surveys, and Large Language Model training and evaluation on the comparison dataset. The results indicate a significant increase in annotators' self-perceived competence and confidence. Moreover, observed changes in data quality suggest that the annotation process may enhance annotator competence and that this effect is more pronounced in expert groups. The observed shifts in annotator competence have a visible impact on the performance of LLMs trained on their annotated data.
Problem

Research questions and friction points this paper is trying to address.

annotation competence
social influence recognition
subjective annotation
annotator development
data quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

annotator competence
social influence recognition
annotation dynamics
large language models
expertise development
Maciej Markiewicz
Wrocław University of Science and Technology, Wrocław, Poland
Beata Bajcar
Wrocław University of Science and Technology, Wrocław, Poland
Wiktoria Mieleszczenko-Kowszewicz
Researcher, Warsaw University of Technology, Warsaw, Poland
psychology, psycholinguistics, LLM, AI
Aleksander Szczęsny
Wrocław University of Science and Technology, Wrocław, Poland
Tomasz Adamczyk
Wrocław University of Science and Technology, Wrocław, Poland
Grzegorz Chodak
Wrocław University of Science and Technology, Wrocław, Poland
Karolina Ostrowska
University of Silesia in Katowice, Katowice, Poland
Aleksandra Sawczuk
University of Silesia in Katowice, Katowice, Poland
Jolanta Babiak
Wrocław University of Science and Technology, Wrocław, Poland
Jagoda Szklarczyk
SWPS University, Wrocław, Poland
Przemysław Kazienko
Wrocław University of Science and Technology, Wrocław, Poland
NLP, affective computing, wearables, machine learning, social networks