🤖 AI Summary
This work addresses the challenge of spurious correlations in multi-behavior recommendation systems caused by confounding factors, a problem exacerbated by existing explainable methods that rely on external information and exhibit limited generalization. To this end, the paper introduces the first integration of causal inference into a neural-symbolic framework, modeling the endogenous logic of user behavior chains to construct an interpretable causal mediator for deconfounding. By combining hierarchical preference propagation with adaptive neuro-logical reasoning paths—such as conjunction and disjunction—the approach achieves multi-level interpretability spanning both model architecture and recommendation outputs. Evaluated on three large-scale datasets, the proposed method significantly outperforms state-of-the-art baselines, simultaneously enhancing recommendation accuracy and mitigating confounding bias without requiring any external information.
📝 Abstract
Existing multi-behavior recommendations tend to prioritize performance at the expense of explainability, while current explainable methods suffer from limited generalizability due to their reliance on external information. Neuro-Symbolic integration offers a promising avenue for explainability by combining neural networks with symbolic logic rule reasoning. Concurrently, we posit that user behavior chains inherently embody an endogenous logic suitable for explicit reasoning. However, these observational multiple behaviors are plagued by confounders, causing models to learn spurious correlations. By incorporating causal inference into this Neuro-Symbolic framework, we propose a novel Causal Neuro-Symbolic Reasoning model for Explainable Multi-Behavior Recommendation (CNRE). CNRE operationalizes the endogenous logic by simulating a human-like decision-making process. Specifically, CNRE first employs hierarchical preference propagation to capture heterogeneous cross-behavior dependencies. Subsequently, it models the endogenous logic rule implicit in the user's behavior chain based on preference strength, and adaptively dispatches to the corresponding neural-logic reasoning path (e.g., conjunction, disjunction). This process generates an explainable causal mediator that approximates an ideal state isolated from confounding effects. Extensive experiments on three large-scale datasets demonstrate CNRE's significant superiority over state-of-the-art baselines, offering multi-level explainability from model design and decision process to recommendation results.