🤖 AI Summary
This work addresses the high cost of manual annotation in complex aspect-based sentiment analysis (ABSA) tasks such as Aspect Sentiment Quad Prediction (ASQP). The authors propose LA-ABSA, a framework that uses large language models (LLMs) as annotators: guided by only a few human-labeled examples through in-context learning, the LLM generates labeled data for fine-tuning lightweight downstream models. This reduces reliance on both extensive human annotation and expensive LLM inference. Evaluated on five benchmark datasets, including SemEval Rest16, the method achieves an F1 score of 49.85 on ASQP, closely approaching the in-context-learning performance of Gemma-3-27B (51.10) while requiring far less compute.
📝 Abstract
Training models for Aspect-Based Sentiment Analysis (ABSA) tasks requires manually annotated data, which is expensive and time-consuming to obtain. This paper introduces LA-ABSA, a novel approach that leverages Large Language Model (LLM)-generated annotations to fine-tune lightweight models for complex ABSA tasks. We evaluate our approach on five datasets for Target Aspect Sentiment Detection (TASD) and Aspect Sentiment Quad Prediction (ASQP). Our approach outperforms previously reported augmentation strategies and achieves performance competitive with LLM prompting in low-resource scenarios, while providing substantial energy-efficiency benefits. For example, using 50 annotated examples for in-context learning (ICL) to guide the annotation of unlabeled data, LA-ABSA achieved an F1 score of 49.85 for ASQP on the SemEval Rest16 dataset, closely matching the performance of ICL prompting with Gemma-3-27B (51.10), while requiring significantly lower computational resources.
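To make the annotation setup concrete, the following is a minimal sketch of how an ICL prompt for LLM-based ASQP annotation might be assembled: a task instruction, a handful of human-labeled demonstration sentences with their (aspect, category, sentiment, opinion) quadruples, and the unlabeled sentence to annotate. All names, the prompt format, and the example data are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of LLM-as-annotator prompting for ASQP.
# The seed examples stand in for the paper's ~50 human-labeled ICL examples.

SEED_EXAMPLES = [
    # (review sentence, list of (aspect, category, sentiment, opinion) quads)
    ("The pasta was delicious but service was slow.",
     [("pasta", "food quality", "positive", "delicious"),
      ("service", "service general", "negative", "slow")]),
]

def format_quads(quads):
    """Render quadruples in a simple textual form the LLM is asked to emit."""
    return "; ".join(f"({a}, {c}, {s}, {o})" for a, c, s, o in quads)

def build_annotation_prompt(unlabeled_sentence, seed=SEED_EXAMPLES):
    """Assemble an ICL prompt: instruction, labeled demos, target sentence."""
    lines = ["Extract (aspect, category, sentiment, opinion) quadruples "
             "from each restaurant review sentence."]
    for sentence, quads in seed:
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Quadruples: {format_quads(quads)}")
    lines.append(f"Sentence: {unlabeled_sentence}")
    lines.append("Quadruples:")  # the annotator LLM completes from here
    return "\n".join(lines)

prompt = build_annotation_prompt("The decor is lovely.")
```

In this setup, the returned prompt would be sent to an annotator LLM (e.g. Gemma-3-27B); its completions over a pool of unlabeled sentences become the silver-standard training data used to fine-tune a lightweight downstream model.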