Large Language Models as Automatic Annotators and Annotation Adjudicators for Fine-Grained Opinion Analysis

📅 2026-01-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of acquiring high-quality annotated data for fine-grained opinion analysis tasks, such as Aspect Sentiment Triplet Extraction (ASTE) and Aspect-Category-Opinion-Sentiment (ACOS) extraction, which are hindered by high annotation costs and substantial human effort, particularly in multi-domain settings. To mitigate these limitations, the authors propose an automatic labelling and adjudication framework grounded in large language models (LLMs) and built on a declarative annotation pipeline. This approach reduces the inconsistencies that arise from manual prompt engineering while achieving high inter-annotator agreement on both the ASTE and ACOS tasks. By minimising reliance on human annotators and lowering data-construction costs, the method improves the reliability and scalability of cross-model annotation, facilitating practical deployment across diverse domains.
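The annotate-then-adjudicate workflow described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the LLM annotators are stubbed with fixed outputs, and adjudication is shown as simple majority voting, whereas the paper uses an LLM as the adjudicator. All function names and example triplets here are hypothetical.

```python
from collections import Counter

def annotate(sentence, annotator):
    """Stub for one LLM annotator: returns a set of
    (aspect, opinion, sentiment) triplets. A real pipeline would
    call a model with a declarative prompt template instead."""
    stub_outputs = {
        "A": {("battery", "great", "POS"), ("screen", "dim", "NEG")},
        "B": {("battery", "great", "POS")},
        "C": {("battery", "great", "POS"), ("screen", "dim", "NEG")},
    }
    return stub_outputs[annotator]

def adjudicate(label_sets, min_votes=2):
    """Majority-vote adjudication: keep triplets proposed by at least
    min_votes annotators. (A simple proxy for the paper's LLM adjudicator.)"""
    votes = Counter(t for labels in label_sets for t in labels)
    return {t for t, n in votes.items() if n >= min_votes}

sentence = "The battery is great but the screen is dim."
labels = [annotate(sentence, a) for a in ("A", "B", "C")]
final = adjudicate(labels)
print(sorted(final))  # both triplets receive >= 2 votes and survive adjudication
```

Raising `min_votes` trades recall for precision: with `min_votes=3`, only the unanimously proposed battery triplet would remain.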

๐Ÿ“ Abstract
Fine-grained opinion analysis of text provides a detailed understanding of expressed sentiments, including the addressed entity. Although this level of detail is sound, it requires considerable human effort and substantial cost to annotate opinions in datasets for training models, especially across diverse domains and real-world applications. We explore the feasibility of LLMs as automatic annotators for fine-grained opinion analysis, addressing the shortage of domain-specific labelled datasets. In this work, we use a declarative annotation pipeline. This approach reduces the variability of manual prompt engineering when using LLMs to identify fine-grained opinion spans in text. We also present a novel methodology for an LLM to adjudicate multiple labels and produce final annotations. After trialling the pipeline with models of different sizes for the Aspect Sentiment Triplet Extraction (ASTE) and Aspect-Category-Opinion-Sentiment (ACOS) analysis tasks, we show that LLMs can serve as automatic annotators and adjudicators, achieving high Inter-Annotator Agreement across individual LLM-based annotators. This reduces the cost and human effort needed to create these fine-grained opinion-annotated datasets.
Problem

Research questions and friction points this paper is trying to address.

fine-grained opinion analysis
annotation cost
domain-specific datasets
human annotation effort
labelled data scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Automatic Annotation
Annotation Adjudication
Fine-Grained Opinion Analysis
Declarative Annotation Pipeline
Gaurav Negi
Data Science Institute, University of Galway, Ireland
MA Waskow
Data Science Institute, University of Galway, Ireland
Paul Buitelaar
Professor in Data Analytics, Data Science Institute, University of Galway; Co-PI, Insight Centre
Natural Language Processing · Knowledge Graphs · Text Mining · Semantics