🤖 AI Summary
Existing explanation methods for recommender systems suffer from insufficient fidelity: they fail to accurately reflect the model's true reasoning process and are especially unstable under sparse implicit feedback. This paper proposes SPINRec, a fidelity-oriented explanation framework grounded in stochastic path integration. Its core innovations are: (i) an empirical-distribution-driven stochastic baseline sampling strategy that replaces fixed baselines, enabling personalized and robust explanation paths; and (ii) the first unified fidelity evaluation protocol spanning multiple models (MF, VAE, NCF) and datasets, integrating AUC-based perturbation curves with fixed-length diagnostic metrics. Extensive experiments show that SPINRec significantly outperforms state-of-the-art methods on three benchmark datasets, establishing a new standard for recommendation explainability. The implementation is publicly available.
📝 Abstract
Explanation fidelity, which measures how accurately an explanation reflects a model's true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic approach that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To overcome the limitations of prior methods, SPINRec employs stochastic baseline sampling: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design captures the influence of both observed and unobserved interactions, yielding more stable and personalized explanations. We conduct the most comprehensive fidelity evaluation to date across three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms all baselines, establishing a new benchmark for faithful explainability in recommendation. Code and evaluation tools are publicly available at https://github.com/DeltaLabTLV/SPINRec.
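The stochastic baseline sampling described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it uses a toy linear scoring model in place of MF/VAE/NCF, approximates path integration with a Riemann sum (as in integrated gradients), and uses the completeness gap as a stand-in proxy for the paper's faithfulness criterion when selecting among sampled baseline profiles. The names `spin_attribution` and `integrated_gradients` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable recommender score over a user's binary interaction
# vector. Stands in for MF/VAE/NCF; SPINRec itself is model-agnostic.
W = rng.normal(size=8)

def score(x):
    return float(W @ x)

def grad(x):
    # Gradient of the linear score is constant; a real model would
    # differentiate through its network here.
    return W.copy()

def integrated_gradients(x, baseline, steps=32):
    # Riemann-sum approximation of the path integral of gradients
    # along the straight line from the baseline profile to x.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def spin_attribution(x, profile_pool, n_samples=5):
    """Sketch of stochastic baseline sampling: draw candidate baselines
    from an empirical pool of observed user profiles and keep the
    attribution whose completeness gap is smallest (a proxy for the
    paper's fidelity-based selection, which is an assumption here)."""
    best, best_gap = None, np.inf
    idx = rng.choice(len(profile_pool), size=n_samples, replace=False)
    for b in profile_pool[idx]:
        attr = integrated_gradients(x, b)
        # Completeness: attributions should sum to score(x) - score(b).
        gap = abs(attr.sum() - (score(x) - score(b)))
        if gap < best_gap:
            best, best_gap = attr, gap
    return best
```

Sampling baselines from the empirical profile distribution, rather than integrating from an all-zeros vector, is what lets both observed and unobserved interactions receive meaningful attribution mass.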