🤖 AI Summary
Existing Czech ABSA datasets support only aspect term extraction or polarity classification, lacking unified annotations for joint target–aspect–category detection. To address this gap, we introduce the first Czech fine-grained ABSA dataset conforming to the SemEval-2016 format, comprising 3.1K manually double-annotated restaurant reviews with approximately 90% inter-annotator agreement, enabling standardized modeling of complex Czech ABSA tasks for the first time. We further release 24 million unlabeled Czech reviews to facilitate unsupervised learning. Extensive experiments with Transformer-based models establish strong baselines across all subtasks. All resources, including code, annotated data, and detailed annotation guidelines, are publicly available under a non-commercial academic license. This work enables cross-lingual ABSA benchmarking and supports low-resource sentiment analysis, particularly for morphologically rich languages like Czech.
📝 Abstract
In this paper, we introduce a novel Czech dataset for aspect-based sentiment analysis (ABSA), which consists of 3.1K manually annotated reviews from the restaurant domain. The dataset is built upon an older Czech dataset, which contained only separate labels for basic ABSA tasks such as aspect term extraction or aspect polarity detection. Unlike its predecessor, our new dataset is specifically designed for more complex tasks, e.g., target–aspect–category detection. These advanced tasks require a unified annotation format that links the sentiment elements (labels) together. Our dataset follows the format of the well-known SemEval-2016 datasets. This design choice allows straightforward application and evaluation in cross-lingual scenarios, ultimately fostering cross-language comparisons with equivalent counterpart datasets in other languages. The annotation process engaged two trained annotators, yielding an inter-annotator agreement of approximately 90%. Additionally, we provide 24M reviews without annotations, suitable for unsupervised learning. We present robust monolingual baseline results achieved with various Transformer-based models, along with an insightful error analysis, to supplement our contributions. Our code and dataset are freely available for non-commercial research purposes.
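To make the unified annotation format concrete, the sketch below parses one sentence annotated in the SemEval-2016 ABSA XML style that the abstract references, where each `<Opinion>` element links a target span to an entity#attribute category and a polarity. The example sentence, offsets, and labels are invented for illustration and are not taken from the dataset itself.

```python
import xml.etree.ElementTree as ET

# Hypothetical review sentence in the SemEval-2016 ABSA XML style.
# Text, character offsets, and labels are invented for illustration only.
SAMPLE = """
<sentence id="example:1">
  <text>The soup was excellent but the service was slow.</text>
  <Opinions>
    <Opinion target="soup" category="FOOD#QUALITY" polarity="positive" from="4" to="8"/>
    <Opinion target="service" category="SERVICE#GENERAL" polarity="negative" from="31" to="38"/>
  </Opinions>
</sentence>
"""

def extract_triplets(xml_string):
    """Return (target, category, polarity) triplets from one annotated sentence."""
    sentence = ET.fromstring(xml_string)
    return [
        (op.get("target"), op.get("category"), op.get("polarity"))
        for op in sentence.iter("Opinion")
    ]

print(extract_triplets(SAMPLE))
```

Because every opinion carries its target, category, and polarity in a single element, the same file supports both the basic subtasks (extract targets, classify polarity) and the joint target–aspect–category detection task the dataset was designed for.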