🤖 AI Summary
This work addresses the issue of insufficient prediction reliability in dimensional aspect-based sentiment analysis by proposing a Self-Consistent Structured Generation (SCSG) mechanism. The approach leverages multiple inference passes with a LoRA-finetuned Gemma 3 large language model, retaining only sentiment triplets that achieve majority agreement across runs. To balance consistency and computational efficiency, it integrates vLLM’s PagedAttention for key-value cache reuse during decoding. Evaluated across six languages and eight language-domain combinations, the method significantly outperforms single-pass baselines, consistently ranking among the top seven overall. Notably, it achieves second place on three English tasks and secures first place in the Tatar restaurant domain.
📝 Abstract
We present Self-Consistent Structured Generation (SCSG) for Dimensional Aspect-Based Sentiment Analysis in SemEval-2026 Task 3 (Track A). SCSG enhances prediction reliability by executing a LoRA-adapted large language model multiple times per instance, retaining only tuples that achieve a majority consensus across runs. To mitigate the computational overhead of multiple forward passes, we leverage vLLM's PagedAttention mechanism for efficient key-value cache reuse. Evaluation across six languages and eight language-domain combinations demonstrates that self-consistency with 15 executions yields statistically significant improvements over single-inference prompting, with our system (leveraging Gemma 3) ranking in the top seven across all settings, achieving second place on three out of four English subsets and first place on Tatar-Restaurant for DimASTE.
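The core of the consensus step can be sketched as a simple vote over the tuples produced by each sampled run. This is a minimal illustration, not the authors' implementation: the function name, the `(aspect, opinion, sentiment)` tuple format, and the exact-match voting criterion are assumptions for demonstration.

```python
from collections import Counter

def self_consistent_tuples(runs, min_votes=None):
    """Keep only tuples appearing in a strict majority of runs.

    runs: list of sets of hashable sentiment tuples, one set per
    sampled generation. min_votes defaults to a strict majority
    (e.g. 8 of 15 runs).
    """
    if min_votes is None:
        min_votes = len(runs) // 2 + 1
    # Count each tuple at most once per run, then threshold.
    votes = Counter(t for run in runs for t in set(run))
    return {t for t, n in votes.items() if n >= min_votes}

# Three illustrative sampled runs over the same review.
runs = [
    {("battery", "lasts long", "positive"), ("screen", "dim", "negative")},
    {("battery", "lasts long", "positive")},
    {("battery", "lasts long", "positive"), ("case", "flimsy", "negative")},
]
print(self_consistent_tuples(runs))
# Only the battery tuple reaches the 2-of-3 majority; the others get 1 vote each.
```

In the paper's setting the vote would run over 15 structured generations per instance, with vLLM batching the runs so that shared prompt prefixes reuse the same KV-cache blocks via PagedAttention.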