A Logical-Rule Autoencoder for Interpretable Recommendations

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability of deep recommender systems stemming from their black-box nature by proposing the Logical-rule Interpretable Autoencoder (LIA). LIA incorporates a learnable logic rule layer that automatically selects between AND and OR operations during training and efficiently encodes negations through the signs of connection weights, thereby extracting human-readable recommendation rules directly from data. Built on an intrinsically interpretable autoencoder architecture, the method achieves functionally complete logical expressiveness without expanding the input dimensionality and enables end-to-end learning of explicit, traceable decision logic. Experimental results show that LIA matches or exceeds the recommendation performance of conventional baselines while preserving full interpretability and offering intuitive, user-level decision pathways.
📝 Abstract
Most deep learning recommendation models operate as black boxes, relying on latent representations that obscure their decision process. This lack of intrinsic interpretability raises concerns in applications that require transparency and accountability. In this work, we propose a Logical-rule Interpretable Autoencoder (LIA) for collaborative filtering that is interpretable by design. LIA introduces a learnable logical rule layer in which each rule neuron is equipped with a gate parameter that automatically selects between AND and OR operators during training, enabling the model to discover diverse logical patterns directly from data. To support functional completeness without doubling the input dimensionality, LIA encodes negation through the sign of connection weights, providing a parameter-efficient mechanism for expressing both positive and negated item conditions within each rule. By learning explicit, human-readable reconstruction rules, LIA allows users to directly trace the decision process behind each recommendation. Extensive experiments show that our method achieves improved recommendation performance over traditional baselines while remaining fully interpretable. Code and data are available at https://github.com/weibowen555/LIA.
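The abstract describes two mechanisms: a gate parameter per rule neuron that interpolates between soft-AND and soft-OR during training, and negation encoded by the sign of each connection weight so that both positive and negated item conditions fit in one weight vector. The sketch below illustrates one such rule neuron under common soft-logic relaxations (product t-norm for AND, its De Morgan dual for OR); the function names, the `tanh` membership mapping, and the exact gating form are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rule_neuron(x, w, gate_logit):
    """One soft logical rule neuron (illustrative sketch).

    x          -- item activations in [0, 1]
    w          -- signed weights: sign encodes negation, magnitude encodes
                  how strongly each item participates in the rule
    gate_logit -- learnable scalar; the gate g = sigmoid(gate_logit)
                  interpolates between soft-AND (g -> 1) and soft-OR (g -> 0)
    """
    # Negation via weight sign: a negative w_i flips the literal to (1 - x_i),
    # so no duplicated "negated input" columns are needed.
    literals = np.where(w >= 0.0, x, 1.0 - x)

    # Soft membership in [0, 1): items with |w_i| near 0 drop out of the rule.
    m = np.tanh(np.abs(w))

    # Product-based soft AND: irrelevant items (m_i ~ 0) contribute a factor ~1.
    soft_and = np.prod(1.0 - m * (1.0 - literals))
    # De Morgan dual soft OR: irrelevant items contribute a factor ~0.
    soft_or = 1.0 - np.prod(1.0 - m * literals)

    g = sigmoid(gate_logit)
    return g * soft_and + (1.0 - g) * soft_or
```

Because the AND/OR choice and the negation pattern are both read off learned parameters (`gate_logit` and the signs of `w`), a trained neuron can be printed back as a human-readable rule such as "item_1 AND NOT item_2".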
Problem

Research questions and friction points this paper is trying to address.

interpretability
recommendation systems
black-box models
logical rules
collaborative filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

logical-rule autoencoder
interpretable recommendation
learnable logical gates
negation encoding
collaborative filtering