🤖 AI Summary
To address the low transparency and weak user trust that stem from insufficient explainability in recommender systems, this paper proposes the first image-level Bayesian ranking framework for explanation. The method treats user-generated visual content directly as an explanation source, combining Bayesian probabilistic modeling, uncertainty-aware ranking of image embeddings, and lightweight Monte Carlo approximate inference to produce high-fidelity explanations at substantially reduced computational cost. Unlike conventional post-hoc explanation methods, the framework jointly targets explanation reliability, stability, and sustainability. Experiments on multiple visual recommendation benchmarks show gains of 23.6% in explanation fidelity and 31.4% in user trust, along with a 58% reduction in inference energy consumption.
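The summary does not spell out how the uncertainty-aware ranking is carried out, so the snippet below is only a minimal sketch of the general idea: draw Monte Carlo samples from an assumed Gaussian posterior over each candidate image's relevance score, then rank images by their expected score and estimate how often each one comes out on top. The function name `mc_rank_images`, the Gaussian posterior assumption, and all parameter names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_rank_images(mean_scores, score_vars, n_samples=1000):
    """Rank candidate explanation images under score uncertainty.

    mean_scores : (n_images,) assumed posterior mean relevance per image
    score_vars  : (n_images,) assumed posterior variance per image
    Returns the ranking by expected score and, for each image, the
    Monte Carlo probability that it is the top-ranked explanation.
    """
    n_images = len(mean_scores)
    # Sample relevance scores from each image's approximate posterior.
    samples = rng.normal(
        loc=mean_scores,
        scale=np.sqrt(score_vars),
        size=(n_samples, n_images),
    )
    expected = samples.mean(axis=0)                   # MC estimate of E[score]
    top_prob = np.bincount(samples.argmax(axis=1),    # P(image ranked first)
                           minlength=n_images) / n_samples
    order = np.argsort(-expected)
    return order, expected, top_prob

# Toy usage: four user-uploaded images with uncertain relevance scores.
means = np.array([0.80, 0.75, 0.30, 0.50])
variances = np.array([0.05, 0.20, 0.01, 0.10])
order, expected, top_prob = mc_rank_images(means, variances, n_samples=2000)
print("ranking:", order, "P(top-1):", np.round(top_prob, 3))
```

A sampling-based ranking like this illustrates why the summary highlights "lightweight" inference: only a modest number of draws per image is needed to estimate both the ordering and its stability, rather than computing the rank distribution exactly.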