Are Representation Disentanglement and Interpretability Linked in Recommendation Models? A Critical Review and Reproducibility Study

📅 2025-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The relationship between representation disentanglement, recommendation accuracy, and interpretability remains poorly understood. Method: We systematically reproduce and quantitatively evaluate the disentanglement (measured via mutual information, DCI, and SAP), interpretability (assessed using LIME/SHAP attribution scores and user studies), and recommendation performance (Recall@20/NDCG@20) of five mainstream models, including BPR and LightGCN, across four public benchmarks. Contribution/Results: Through large-scale, reproducible experiments, we find, contrary to common assumptions, that disentanglement exhibits no statistically significant correlation with recommendation accuracy (p > 0.05), yet shows a strong positive correlation with interpretability (p < 0.01). Based on this finding, we propose the first unified evaluation framework jointly characterizing disentanglement, interpretability, and performance for recommender systems. All code, datasets, and experimental results are publicly released to advance reproducible research in interpretable recommendation.
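The accuracy metrics cited above, Recall@20 and NDCG@20, can be illustrated with a minimal pure-Python sketch. This is not the paper's evaluation code; it assumes binary relevance (an item is either in the user's held-out set or not), and the function names are my own.

```python
import math

def recall_at_k(ranked_items, relevant_items, k=20):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    if not relevant_items:
        return 0.0
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k=20):
    """Binary-relevance NDCG@k: DCG of the top-k list divided by the ideal DCG.

    Each hit at rank i (0-based) contributes 1 / log2(i + 2); the ideal DCG
    places all relevant items at the top of the list.
    """
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k])
              if item in relevant_items)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant_items), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

In an evaluation loop these would be averaged over all test users; NDCG additionally rewards placing the relevant items near the top of the ranking, which Recall ignores.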

📝 Abstract
Unsupervised learning of disentangled representations has been closely tied to enhancing the representation interpretability of Recommender Systems (RSs). This is achieved by making the representations of individual features more distinctly separated, so that it is easier to attribute the contribution of features to the model's predictions. However, such advantages in interpretability and feature attribution have mainly been explored qualitatively. Moreover, the effect of disentanglement on a model's recommendation performance has been largely overlooked. In this work, we reproduce the recommendation performance, representation disentanglement, and representation interpretability of five well-known recommendation models on four RS datasets. We quantify disentanglement and investigate its link with recommendation effectiveness and representation interpretability. While several existing works in RSs have proposed disentangled representations as a gateway to improved effectiveness and interpretability, our findings show that disentanglement is not necessarily related to effectiveness but is closely related to representation interpretability. Our code and results are publicly available at https://github.com/edervishaj/disentanglement-interpretability-recsys.
Problem

Research questions and friction points this paper is trying to address.

Disentangled Representations
Recommendation Systems
Interpretability