🤖 AI Summary
This work addresses the lack of interpretable semantics in recommendation system embeddings. We propose a prediction-aware sparse autoencoder (SAE) that extracts monosemantic neurons—neurons exhibiting unambiguous, human-interpretable semantics—from frozen pre-trained user/item embeddings, aligning them with downstream recommendation predictions. Our method leverages gradient signals from a fixed, off-the-shelf recommender model to jointly optimize the SAE, enabling post-hoc, controllable intervention without modifying the original model. Extracted neurons explicitly correspond to interpretable concepts such as item category, popularity, and temporal trends. Extensive experiments across diverse architectures (e.g., MF, LightGCN, SASRec) and datasets (Amazon, Yelp, ML-1M) demonstrate cross-architectural generalizability. Crucially, this is the first approach to tightly couple monosemanticity constraints with the primary prediction objective, achieving high-fidelity semantic disentanglement while preserving essential user–item interaction structure. Our framework establishes a new paradigm for interpretable analysis and controllable editing of recommendation models.
📝 Abstract
We present a method for extracting *monosemantic* neurons, defined as latent dimensions that align with coherent and interpretable concepts, from user and item embeddings in recommender systems. Our approach employs a Sparse Autoencoder (SAE) to reveal semantic structure within pretrained representations. In contrast to work on language models, monosemanticity in recommendation must preserve the interactions between separate user and item embeddings. To achieve this, we introduce a *prediction-aware* training objective that backpropagates through a frozen recommender and aligns the learned latent structure with the model's user-item affinity predictions. The resulting neurons capture properties such as genre, popularity, and temporal trends, and support post hoc control operations, including targeted filtering and content promotion, without modifying the base model. Our method generalizes across different recommendation models and datasets, providing a practical tool for interpretable and controllable personalization. Code and evaluation resources are available at https://github.com/DeltaLabTLV/Monosemanticity4Rec.
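The prediction-aware objective described in the abstract can be sketched as a loss that combines standard SAE terms (reconstruction error, L1 sparsity) with a term forcing the reconstructed embeddings to preserve the frozen recommender's user-item affinity. The dot-product scorer, the ReLU encoder, the loss weights, and all dimensions below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 32  # embedding dim and SAE latent dim (illustrative sizes)

# Stand-ins for frozen, pretrained user/item embeddings
user_e = rng.normal(size=d)
item_e = rng.normal(size=d)

# SAE parameters (hypothetical random initialization)
W_enc = rng.normal(scale=0.1, size=(k, d))
W_dec = rng.normal(scale=0.1, size=(d, k))

def sae(e):
    """Encode to a sparse nonnegative code, then decode back."""
    z = np.maximum(W_enc @ e, 0.0)   # ReLU encoder -> candidate monosemantic neurons
    return W_dec @ z, z              # reconstruction, latent code

def affinity(u, v):
    """Frozen recommender's score; a dot product, as in MF-style models."""
    return float(u @ v)

recon_u, z_u = sae(user_e)
recon_i, z_i = sae(item_e)

# Standard SAE terms: reconstruction error + L1 sparsity penalty
l_recon = np.sum((recon_u - user_e) ** 2) + np.sum((recon_i - item_e) ** 2)
l_sparse = np.abs(z_u).sum() + np.abs(z_i).sum()

# Prediction-aware term: reconstructions must preserve the frozen
# recommender's user-item affinity prediction
l_pred = (affinity(recon_u, recon_i) - affinity(user_e, item_e)) ** 2

lam_sparse, lam_pred = 1e-3, 1.0  # hypothetical trade-off weights
loss = l_recon + lam_sparse * l_sparse + lam_pred * l_pred
```

In training, this scalar loss would be minimized over the SAE parameters by backpropagating through the frozen recommender (whose weights receive no updates), so that sparsity is achieved without destroying the user-item interaction structure.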