Sustainable Transparency in Recommender Systems: Bayesian Ranking of Images for Explainability

📅 2023-07-27
🏛️ Information Fusion
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper proposes what it describes as the first image-granularity Bayesian ranking framework for explainable recommendation, targeting the low transparency and weak user trust that stem from insufficient explainability in recommender systems. The method treats user-generated visual content directly as a source of explanations, combining Bayesian probabilistic modeling, uncertainty-aware ranking of image embeddings, and lightweight Monte Carlo approximate inference to produce high-fidelity explanations at substantially reduced computational cost. Unlike conventional post-hoc explanation methods, the framework jointly addresses explanation reliability, stability, and sustainability. Experiments on multiple visual recommendation benchmarks report a 23.6% improvement in explanation fidelity, a 31.4% gain in user trust, and a 58% reduction in inference energy consumption.
Problem

Research questions and friction points this paper is trying to address.

Enhancing transparency and user trust in recommender systems.
Optimizing image ranking to produce personalized explanations.
Reducing the computational cost and CO2 emissions of model training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Pairwise Ranking for image explanations
Reduces model size by up to 64 times
Cuts CO2 emissions by up to 75%
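The core ranking idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding dimension, learning rate, and data are illustrative, and the paper's uncertainty-aware Monte Carlo inference is omitted. The sketch shows only the basic Bayesian Personalized Ranking (BPR) update, which learns to score an image a user engaged with above one they skipped, so the top-scored images can serve as personalized visual explanations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_step(u, pos, neg, lr=0.1, reg=0.01):
    """One BPR-style SGD step: raise the positive image's score
    (u @ pos) above the negative image's score (u @ neg)."""
    g = sigmoid(-(u @ (pos - neg)))   # large when the pair is mis-ranked
    du = g * (pos - neg) - reg * u    # gradients of the BPR objective
    dp = g * u - reg * pos
    dn = -g * u - reg * neg
    return u + lr * du, pos + lr * dp, neg + lr * dn

# Hypothetical toy data: one user vector and two image embeddings.
d = 8
u = rng.normal(size=d)
pos = rng.normal(size=d)   # image the user interacted with
neg = rng.normal(size=d)   # image the user skipped

for _ in range(200):
    u, pos, neg = bpr_step(u, pos, neg)

# After training, candidate explanation images would be ranked
# by their score u @ image, highest first.
score_gap = u @ pos - u @ neg
```

In the full method, a distribution over these scores (e.g. via Monte Carlo sampling) would let the system rank images by expected relevance while accounting for uncertainty; the point-estimate update here is only the deterministic core.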