🤖 AI Summary
Knowledge graph-aware recommender systems (KGRSs) suffer significant performance degradation under sparse user-item interactions, and complex GNN architectures, especially those incorporating attention mechanisms, further exacerbate learning difficulty. To address this, we propose LightKG, a lightweight GNN-based framework. Methodologically, LightKG abandons dense relation embeddings and attention mechanisms, instead encoding each directed relation as a pair of scalars and aggregating neighborhoods linearly. It also introduces an efficient contrastive layer that operates directly on the original knowledge graph, enabling self-supervised training without costly subgraph generation and comparison. Extensive experiments on four benchmark datasets show that LightKG improves recommendation accuracy by an average of 5.8% over the best baselines, reduces training time by 84.3% compared to SSL-based baselines, and consistently outperforms 12 state-of-the-art methods under both sparse and dense interaction scenarios.
📝 Abstract
Recently, Graph Neural Networks (GNNs) have become the dominant approach for Knowledge Graph-aware Recommender Systems (KGRSs) due to their proven effectiveness. Building upon GNN-based KGRSs, Self-Supervised Learning (SSL) has been incorporated to address the sparsity issue, at the cost of longer training times. However, through extensive experiments, we reveal that: (1) compared to other KGRSs, existing GNN-based KGRSs fail to maintain their superior performance under sparse interactions, even with SSL; (2) more complex models tend to perform worse in sparse interaction scenarios, and complex mechanisms such as attention can be detrimental because they often increase learning difficulty. Inspired by these findings, we propose LightKG, a simple yet powerful GNN-based KGRS that addresses the sparsity issue. LightKG includes a simplified GNN layer that encodes directed relations as scalar pairs rather than dense embeddings and employs a linear aggregation framework, greatly reducing the complexity of GNNs. Additionally, LightKG incorporates an efficient contrastive layer to implement SSL. It directly minimizes node similarity in the original graph, avoiding the time-consuming subgraph generation and comparison required by previous SSL methods. Experiments on four benchmark datasets show that LightKG outperforms 12 competitive KGRSs in both sparse and dense scenarios while significantly reducing training time. Specifically, it surpasses the best baselines by an average of 5.8% in recommendation accuracy and saves 84.3% of training time compared to KGRSs with SSL. Our code is available at https://github.com/1371149/LightKG.
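To make the two core ideas concrete, here is a minimal, illustrative sketch (not the authors' implementation; all function names, the scalar-pair parameterization, and the uniformity-style loss are our assumptions based on the abstract's description): a linear aggregation layer where each directed relation contributes only a learned scalar pair, and a contrastive term that minimizes node similarity directly on the original graph.

```python
import numpy as np

def linear_aggregate(emb, edges, rel_scalars):
    """One simplified GNN layer (sketch): neighbors are combined
    linearly, each directed relation contributing a learned scalar
    pair (forward, backward) instead of a dense relation embedding
    or an attention score."""
    out = np.zeros_like(emb)
    deg = np.zeros(len(emb))
    for head, tail, rel in edges:
        w_fwd, w_bwd = rel_scalars[rel]  # scalar pair per directed relation
        out[head] += w_fwd * emb[tail]   # head aggregates tail via forward scalar
        out[tail] += w_bwd * emb[head]   # tail aggregates head via backward scalar
        deg[head] += 1
        deg[tail] += 1
    return out / np.maximum(deg, 1)[:, None]  # simple degree normalization

def contrastive_uniformity_loss(emb):
    """Contrastive term (sketch): penalize mean pairwise cosine
    similarity over the original graph's nodes, pushing embeddings
    apart without generating or comparing augmented subgraphs."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = len(emb)
    return (sim.sum() - n) / (n * (n - 1))  # exclude self-similarity
```

Because the layer is linear and each relation costs only two scalars, a forward pass is a sparse weighted sum, and the contrastive loss touches only the existing node set, which is consistent with the training-time savings reported below.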