🤖 AI Summary
This work proposes XCom, a novel model designed to address the limited interpretability of Transformer-based comparative opinion mining systems, which often undermines user trust. By integrating aspect-level sentiment scoring with a semantic reasoning module and incorporating Shapley Additive Explanations (SHAP), XCom makes its decision-making process transparent. The model achieves state-of-the-art results across multiple benchmark datasets while delivering intuitive and reliable interpretability outputs that strengthen users' understanding of, and confidence in, its predictions.
📝 Abstract
Comparative opinion mining compares products across different reviews. However, Transformer-based models designed for this task often lack transparency, which can hinder users' trust. In this paper, we propose XCom, an enhanced Transformer-based model composed of two principal modules: (i) aspect-based rating prediction and (ii) semantic analysis for comparative opinion mining. XCom also incorporates a Shapley Additive Explanations (SHAP) module to provide interpretable insights into the model's decisions. Empirically, XCom outperforms strong baselines while providing meaningful explanations, making it a more reliable tool for comparative opinion mining. Source code is available at: https://anonymous.4open.science/r/XCom.
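The SHAP module mentioned in the abstract attributes a model's prediction to its input aspects. As a minimal, self-contained sketch of the underlying idea (not XCom's actual implementation; `ASPECT_SCORES` and `rating_diff` are hypothetical stand-ins for an aspect-level comparative scorer), exact Shapley values can be computed by enumerating all coalitions of aspects and averaging each aspect's marginal contribution:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn maps a set of 'present' features to a model output;
    each feature's Shapley value is its weighted average marginal
    contribution over all coalitions of the remaining features.
    """
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                # classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
    return phi

# Hypothetical aspect-level sentiment deltas between two products
# (e.g., product A vs. product B): positive favors A.
ASPECT_SCORES = {"battery": 0.8, "camera": -0.3, "price": 0.5}

def rating_diff(aspects):
    """Toy comparative scorer: sum of the considered aspects' deltas."""
    return sum(ASPECT_SCORES[a] for a in aspects)

phi = shapley_values(list(ASPECT_SCORES), rating_diff)
```

Because the toy scorer is additive, each aspect's Shapley value recovers its own score exactly, and the values sum to the full prediction; a real Transformer-based scorer is non-additive, which is why approximation methods such as those in the `shap` library are used in practice.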