🤖 AI Summary
This work addresses exposure bias and error propagation in aspect sentiment quadruple prediction (ASQP), problems that arise when the unordered quadruple set is linearized into a fixed decoding order. To mitigate them, the authors propose a Generate-then-Correct (G2C) framework: a generator first produces an initial sequence of quadruples, and a corrector then performs a single global refinement over the entire sequence. This two-stage, single-pass design removes the dependence on decoding order and narrows the train-inference discrepancy. The corrector is trained on synthetic erroneous samples produced by large language models, pairing sequence generation with sequence-level global correction. Experimental results show that G2C significantly outperforms strong baselines on the Rest15 and Rest16 datasets.
📝 Abstract
Aspect-based sentiment analysis (ABSA) extracts aspect-level sentiment signals from user-generated text, supports product analytics, experience monitoring, and public-opinion tracking, and is central to fine-grained opinion mining. A key challenge in ABSA is aspect sentiment quad prediction (ASQP), which requires identifying four elements: the aspect term, the aspect category, the opinion term, and the sentiment polarity. However, existing studies usually linearize the unordered quad set into a fixed-order template and decode it left to right. Under teacher-forcing training, the resulting training-inference mismatch (exposure bias) lets early prefix errors propagate to later elements. Because the linearization order determines which elements appear earlier in the prefix, this propagation is order-sensitive and hard to repair in a single decoding pass. To address this, we propose Generate-then-Correct (G2C): a generator drafts quads, and a corrector performs a single-shot, sequence-level global correction; the corrector is trained on LLM-synthesized drafts exhibiting common error patterns. On the Rest15 and Rest16 datasets, G2C outperforms strong baseline models.
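To make the two-stage idea concrete, here is a minimal conceptual sketch, not the paper's actual model. The generator is a stand-in that emits a draft quad sequence containing typical errors (a misspelled polarity label and a duplicate quad), and the corrector repairs the whole sequence in one global pass. All function names, the error types, and the repair heuristics are illustrative assumptions.

```python
# Conceptual sketch of Generate-then-Correct (G2C): draft a quad sequence,
# then make ONE global correction pass over the entire sequence. This is an
# illustrative toy, not the paper's neural generator/corrector.
import difflib
from typing import List, Tuple

Quad = Tuple[str, str, str, str]  # (aspect, category, opinion, polarity)
VALID_POLARITIES = ["positive", "negative", "neutral"]

def generate_draft(sentence: str) -> List[Quad]:
    """Stand-in for a seq2seq generator that linearizes quads.

    A real generator would be a fine-tuned encoder-decoder model; here we
    hard-code a draft with common error patterns (a typo'd polarity label
    and an exact duplicate) so the corrector has something to fix.
    """
    return [
        ("pizza", "food quality", "delicious", "positve"),   # typo'd label
        ("pizza", "food quality", "delicious", "positve"),   # duplicate
        ("service", "service general", "slow", "negative"),
    ]

def correct(quads: List[Quad]) -> List[Quad]:
    """One global pass over the entire draft sequence.

    Unlike left-to-right decoding, the corrector sees every quad at once,
    so a fix never depends on the order in which quads were emitted.
    """
    seen = set()
    fixed: List[Quad] = []
    for aspect, category, opinion, polarity in quads:
        if polarity not in VALID_POLARITIES:
            # Snap an invalid label to the closest valid one (toy heuristic).
            match = difflib.get_close_matches(polarity, VALID_POLARITIES, n=1)
            polarity = match[0] if match else "neutral"
        quad = (aspect, category, opinion, polarity)
        if quad not in seen:  # drop exact duplicates
            seen.add(quad)
            fixed.append(quad)
    return fixed

draft = generate_draft("The pizza was delicious but the service was slow.")
final = correct(draft)
```

In the paper's setting both stages are learned models and the corrector is trained on LLM-synthesized erroneous drafts; the sketch only shows the control flow, namely that correction operates on the full sequence in a single pass rather than element by element during decoding.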