🤖 AI Summary
This work addresses the challenge of extracting interpretable and controllable recommendation features solely from user interaction data, enabling targeted guidance in recommender systems. To this end, we propose inserting a sparse autoencoder (SAE) between the encoder and decoder of a collaborative filtering autoencoder (CFAE), a novel application of SAEs to collaborative filtering, in order to learn monosemantic latent features and establish explicit mappings between individual neurons and human-interpretable semantic concepts. By selectively activating specific neurons, the model enables precise control over the direction of the generated recommendations. Experimental results demonstrate that the proposed approach extracts highly interpretable features and supports flexible, fine-grained manipulation of recommendation outcomes.
📝 Abstract
Sparse autoencoders (SAEs) have recently emerged as pivotal tools for introspecting large language models. SAEs can uncover high-quality, interpretable features at different levels of granularity and enable targeted steering of the generation process by selectively activating specific neurons in their latent activations. Our paper is the first to apply this approach to collaborative filtering, aiming to extract similarly interpretable features from representations learned purely from interaction signals. In particular, we focus on a widely adopted class of collaborative filtering autoencoders (CFAEs) and augment them by inserting an SAE between their encoder and decoder networks. We demonstrate that the resulting representation is largely monosemantic and propose suitable mapping functions between semantic concepts and individual neurons. We also evaluate a simple yet effective method that uses this representation to steer recommendations in a desired direction.
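To make the proposed architecture concrete, the following is a minimal NumPy sketch of the idea described above: a CFAE whose latent code passes through an overcomplete ReLU SAE, with steering performed by boosting a single SAE neuron before decoding. All dimensions, weights, and function names here are hypothetical illustrations, not the paper's actual implementation (which would involve trained models and a sparsity penalty during SAE training).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100 items, 16-dim CFAE bottleneck,
# 64 overcomplete SAE features. Random weights stand in
# for trained parameters.
n_items, d_cfae, d_sae = 100, 16, 64
W_enc = rng.normal(scale=0.1, size=(n_items, d_cfae))    # CFAE encoder
W_dec = rng.normal(scale=0.1, size=(d_cfae, n_items))    # CFAE decoder
W_sae_in = rng.normal(scale=0.1, size=(d_cfae, d_sae))   # SAE encoder
W_sae_out = rng.normal(scale=0.1, size=(d_sae, d_cfae))  # SAE decoder

def recommend(x, steer_neuron=None, strength=5.0):
    """Encode interactions, pass the latent through the SAE,
    optionally boost one SAE neuron, then decode to item scores."""
    h = x @ W_enc                      # CFAE latent code
    z = np.maximum(0.0, h @ W_sae_in)  # sparse (ReLU) SAE activations
    if steer_neuron is not None:
        z = z.copy()
        z[steer_neuron] += strength    # activate the chosen concept neuron
    h_hat = z @ W_sae_out              # SAE reconstruction of the latent
    return h_hat @ W_dec               # scores over all items

# A binary interaction vector for one user.
x = (rng.random(n_items) < 0.1).astype(float)
base = recommend(x)
steered = recommend(x, steer_neuron=3)
# Boosting a neuron shifts the item scores, changing the ranking.
```

In the paper's setting, `steer_neuron` would be chosen via the learned mapping from semantic concepts to individual SAE neurons, so that activating it pushes the recommendations toward that concept.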