🤖 AI Summary
This study addresses the low credibility of counterfactual explanations (CEs) in recommender systems. We propose a high-credibility CE generation framework that jointly optimizes counterfactual reasoning and the underlying recommendation model. Our method enforces constraints on the perturbation space, incorporates causal plausibility constraints, and applies interpretability regularization to produce semantically coherent and user-acceptable alternative scenarios. Extensive numerical evaluations on multiple public benchmarks demonstrate significant improvements over state-of-the-art baselines: +23.6% in explanation plausibility, +18.4% in user acceptance rate, and enhanced recommendation fidelity. A user study further confirms that our approach substantially improves users’ depth of understanding and trust in recommendations. This work establishes a new paradigm for explainable recommendation that rigorously integrates causal reasoning with human-centered design principles.
📝 Abstract
Explanations play a variety of roles in various recommender systems, from a legally mandated afterthought, through an integral element of user experience, to a key to persuasiveness. A natural and useful form of an explanation is the Counterfactual Explanation (CE). We present a method for generating highly plausible CEs in recommender systems and evaluate it both numerically and with a user study.