🤖 AI Summary
Existing aspect-based sentiment analysis (ABSA) methods rely on linear dot-product operations to model semantic relationships, limiting their capacity to capture nonlinear dependencies and rendering them susceptible to noise from irrelevant tokens, which leads to inaccurate opinion term localization and biased sentiment polarity prediction. To address these limitations, we propose the Optimal Transport Enhanced Syntactic-Semantic Graph Network (OTESGN), a dual-channel graph neural network that integrates syntactic structure awareness with optimal transport-driven semantic alignment attention. We further introduce an adaptive feature fusion module and a contrastive regularization strategy to enhance robustness and fine-grained sentiment modeling. Experimental results demonstrate state-of-the-art performance, with absolute F1-score improvements of 1.01% on Twitter and 1.30% on Laptop14. Ablation studies and visualization analyses further validate OTESGN's accurate opinion term localization and resilience to noise.
📝 Abstract
Aspect-based sentiment analysis (ABSA) aims to identify aspect terms and determine their sentiment polarity. While dependency trees combined with contextual semantics can effectively identify aspect sentiment, existing methods that rely on syntax trees and aspect-aware attention struggle to model complex semantic relationships. Their dependence on linear dot-product features fails to capture nonlinear associations, allowing noisy similarity from irrelevant words to obscure key opinion terms. Motivated by Differentiable Optimal Matching, we propose the Optimal Transport Enhanced Syntactic-Semantic Graph Network (OTESGN), which introduces a Syntactic-Semantic Collaborative Attention. It comprises a Syntactic Graph-Aware Attention for mining latent syntactic dependencies and modeling global syntactic topology, and a Semantic Optimal Transport Attention designed to uncover fine-grained semantic alignments amid textual noise, thereby accurately capturing sentiment signals obscured by irrelevant tokens. An Adaptive Attention Fusion module integrates these heterogeneous features, and contrastive regularization further improves robustness. Experiments demonstrate that OTESGN achieves state-of-the-art results, outperforming the previous best models by +1.01% F1 on Twitter and +1.30% F1 on Laptop14. Ablation studies and visual analyses corroborate its efficacy in precisely localizing opinion words and resisting noise.
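To make the optimal-transport attention idea concrete, the sketch below shows one common way such a mechanism is built: a token-similarity cost matrix is turned into a transport plan via Sinkhorn normalization, which encourages a balanced, near one-to-one matching between aspect-side queries and context tokens and thereby damps spurious similarity to irrelevant words. This is a minimal, hypothetical illustration of the general technique (entropic OT with Sinkhorn iterations), not the paper's exact OTESGN formulation; the function name, dimensions, and hyperparameters are assumptions.

```python
import numpy as np

def sinkhorn_attention(Q, K, n_iters=50, eps=0.1):
    """Illustrative optimal-transport attention (hypothetical sketch,
    not the exact OTESGN module).

    Q: (m, d) query embeddings (e.g., aspect-side tokens)
    K: (n, d) key embeddings (e.g., context tokens)
    Returns an (m, n) matrix of attention weights whose rows sum to 1.
    """
    # Cost matrix: negative scaled dot-product similarity, so that
    # similar token pairs are cheap to "transport" mass between.
    cost = -(Q @ K.T) / np.sqrt(Q.shape[1])
    # Entropic-regularized OT kernel; smaller eps -> sharper plan.
    P = np.exp(-cost / eps)
    # Sinkhorn iterations: alternately normalize rows and columns so the
    # plan approaches a balanced assignment, preventing many queries
    # from piling their attention mass onto one noisy token.
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    # Final row normalization so each query's weights form a distribution.
    P /= P.sum(axis=1, keepdims=True)
    return P

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((5, 8))
A = sinkhorn_attention(Q, K)
```

In contrast to plain softmax attention, where each row is normalized independently, the column normalization step penalizes tokens that would otherwise attract attention from every query, which is one intuition behind the noise resistance claimed above.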