Creation of the Estonian Subjectivity Dataset: Assessing the Degree of Subjectivity on a Scale

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of document-level subjectivity assessment resources for the low-resource language Estonian. We construct the first fine-grained, continuous-scale (0–100) subjectivity annotation dataset comprising 1,000 documents (300 news articles and 700 web texts), independently rated by four annotators; high-disagreement samples underwent re-annotation to improve inter-annotator consistency. Annotation reliability was validated via multi-annotator correlation analyses (e.g., Pearson and Spearman coefficients). We further conduct the first empirical investigation of GPT-5's capability in automated subjectivity scoring, revealing that while it generates plausible scores, it exhibits systematic biases and cannot replace human annotation. Our contributions are threefold: (1) establishing the first Estonian subjectivity benchmark; (2) proposing a continuous, document-level subjectivity annotation paradigm adaptable to low-resource languages; and (3) providing empirical evidence on the capabilities and limits of LLM-assisted subjectivity evaluation.

📝 Abstract
This article presents the creation of an Estonian-language dataset for document-level subjectivity, analyzes the resulting annotations, and reports an initial experiment on automatic subjectivity analysis using a large language model (LLM). The dataset comprises 1,000 documents (300 journalistic articles and 700 randomly selected web texts), each rated for subjectivity on a continuous scale from 0 (fully objective) to 100 (fully subjective) by four annotators. As the inter-annotator correlations were moderate, with some texts receiving scores at opposite ends of the scale, a subset of texts with the most divergent scores was re-annotated, which improved the inter-annotator correlation. In addition to the human annotations, the dataset includes scores generated by GPT-5 as an experiment in annotation automation. These scores were similar to those of the human annotators; however, several differences emerged, suggesting that while LLM-based automatic subjectivity scoring is feasible, it is not an interchangeable alternative to human annotation, and its suitability depends on the intended application.
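The abstract reports inter-annotator agreement via Pearson and Spearman correlations across the four annotators. A minimal sketch of how such pairwise correlations can be computed, using only the standard library and illustrative scores (the annotator labels and values below are hypothetical, not from the dataset):

```python
from itertools import combinations
from statistics import mean

def pearson(x, y):
    # Pearson correlation: covariance normalized by the product of std devs
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    # Rank transform with average ranks for ties (1-based)
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman correlation = Pearson on the rank-transformed scores
    return pearson(ranks(x), ranks(y))

# Hypothetical 0-100 subjectivity scores from four annotators on five texts
scores = {
    "A1": [10, 85, 40, 5, 70],
    "A2": [15, 90, 35, 10, 60],
    "A3": [20, 80, 55, 0, 75],
    "A4": [5, 95, 30, 15, 65],
}

for a, b in combinations(scores, 2):
    print(f"{a}-{b}: Pearson={pearson(scores[a], scores[b]):.2f}, "
          f"Spearman={spearman(scores[a], scores[b]):.2f}")
```

Reporting both coefficients is useful here: Pearson is sensitive to the magnitude of score differences on the 0–100 scale, while Spearman only reflects whether annotators rank the texts in the same order.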
Problem

Research questions and friction points this paper is trying to address.

Creating an Estonian dataset for document-level subjectivity analysis
Assessing subjectivity on a continuous scale using human annotations
Evaluating LLM-based automatic scoring versus human annotation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created Estonian subjectivity dataset with continuous scale ratings
Used GPT-5 for automated annotation as an experimental approach
Re-annotated divergent scores to improve inter-annotator correlation