The Blessing of Reasoning: LLM-Based Contrastive Explanations in Black-Box Recommender Systems

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental trade-off between interpretability and accuracy in black-box recommender systems. We propose LR-Recsys, the first framework that synergistically integrates the intrinsic reasoning capabilities of large language models (LLMs) with deep neural networks (DNNs) without requiring external knowledge injection—enabling end-to-end generation of contrastive natural-language explanations (positive vs. negative rationale) directly embedded within the recommendation prediction pipeline. We theoretically establish that LR-Recsys achieves learning efficiency gains under high-dimensional, multi-environment settings. Empirically, it outperforms state-of-the-art baselines by 3–14% across three real-world datasets. Moreover, the generated explanations are actionable: user studies confirm significant improvements in user trust, platform controllability, and merchant item-selection decision support.

📝 Abstract
Modern recommender systems use ML models to predict consumer preferences from consumption history. Although these "black-box" models achieve impressive predictive performance, they often suffer from a lack of transparency and explainability. Contrary to the presumed tradeoff between explainability and accuracy, we show that integrating large language models (LLMs) with deep neural networks (DNNs) can improve both. We propose LR-Recsys, which augments DNN-based systems with LLM reasoning capabilities. LR-Recsys introduces a contrastive-explanation generator that produces human-readable positive explanations and negative explanations. These explanations are embedded via a fine-tuned autoencoder and combined with consumer and product features to improve predictions. Beyond offering explainability, we show that LR-Recsys also improves learning efficiency and predictive accuracy, as supported by high-dimensional, multi-environment statistical learning theory. LR-Recsys outperforms state-of-the-art recommender systems by 3–14% on three real-world datasets. Importantly, our analysis reveals that these gains primarily derive from LLMs' reasoning capabilities rather than their external domain knowledge. LR-Recsys presents an effective approach to combine LLMs with traditional DNNs, two of the most widely used ML models today. The explanations generated by LR-Recsys provide actionable insights for consumers, sellers, and platforms, helping to build trust, optimize product offerings, and inform targeting strategies.
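The abstract's pipeline (contrastive explanations → autoencoder embedding → combined with consumer and product features → preference prediction) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, weights, and the one-layer scorer are hypothetical stand-ins, and random vectors substitute for the LLM-generated explanation embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
D_EXPL, D_LATENT, D_USER, D_ITEM = 384, 32, 16, 16

# Stand-ins for the LLM outputs: dense text embeddings of the positive
# ("why this consumer may like the item") and negative ("why not")
# explanations produced by the contrastive-explanation generator.
pos_expl = rng.normal(size=D_EXPL)
neg_expl = rng.normal(size=D_EXPL)

# Encoder half of the fine-tuned autoencoder: compress each explanation
# embedding into a low-dimensional latent (weights random here).
W_enc = rng.normal(size=(D_LATENT, D_EXPL)) / np.sqrt(D_EXPL)
z_pos = np.tanh(W_enc @ pos_expl)
z_neg = np.tanh(W_enc @ neg_expl)

# Consumer and product feature vectors from the usual DNN pipeline.
user_feat = rng.normal(size=D_USER)
item_feat = rng.normal(size=D_ITEM)

# Combine explanation latents with the standard features and score with
# a one-layer predictor; the real system would use a deeper DNN here.
x = np.concatenate([user_feat, item_feat, z_pos, z_neg])
w = rng.normal(size=x.shape[0]) / np.sqrt(x.shape[0])
score = float(1 / (1 + np.exp(-(w @ x))))  # predicted preference in (0, 1)
print(round(score, 4))
```

The design choice the paper emphasizes is that the explanation latents enter the prediction pipeline end to end, so the accuracy gain comes from the reasoning encoded in the explanations rather than from external knowledge injected alongside the features.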
Problem

Research questions and friction points this paper is trying to address.

Enhance transparency in black-box recommender systems
Integrate LLMs with DNNs for better predictions
Generate contrastive explanations to improve user trust
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with DNNs for explainability
Uses contrastive explanations to enhance predictions
Improves accuracy and learning efficiency simultaneously