🤖 AI Summary
Existing concept extraction methods for large language models (LLMs) lack theoretical grounding, making it difficult to establish clear correspondences between internal representations and human-interpretable concepts. This work proposes ConCA, a novel framework that, for the first time, models LLM activations as linear mixtures of context-dependent concept log-posteriors within a latent variable model. By incorporating unsupervised linear unmixing with sparsity-inducing priors, ConCA enables principled and interpretable concept discovery. We develop twelve sparse variants of ConCA and demonstrate their ability to recover semantically coherent concepts across multiple mainstream LLMs, significantly outperforming existing approaches such as sparse autoencoders. The framework thus achieves both theoretical rigor and strong empirical performance.
📝 Abstract
Developing human-understandable interpretations of large language models (LLMs) is becoming increasingly critical for their deployment in essential domains. Mechanistic interpretability addresses this need by extracting human-interpretable processes and concepts from LLMs' activations. Sparse autoencoders (SAEs) have emerged as a popular approach for extracting interpretable and monosemantic concepts by decomposing LLM internal representations over a learned dictionary. Despite their empirical progress, SAEs suffer from a fundamental theoretical ambiguity: a well-defined correspondence between LLM representations and human-interpretable concepts has not been established. This lack of theoretical grounding gives rise to several methodological challenges, including difficulties in principled method design and in defining evaluation criteria. In this work, we show that, under mild assumptions, LLM representations can be approximated as a linear mixture of the log-posteriors over concepts given the input context, through the lens of a latent variable model in which concepts are treated as latent variables. This motivates a principled framework for concept extraction, namely Concept Component Analysis (ConCA), which aims to recover the log-posterior of each concept from LLM representations through an unsupervised linear unmixing process. We explore a specific variant, termed sparse ConCA, which leverages a sparsity prior to address the inherent ill-posedness of the unmixing problem. We implement 12 sparse ConCA variants and demonstrate their ability to extract meaningful concepts across multiple LLMs, offering theory-backed advantages over SAEs.
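The core idea of the abstract, recovering sparse latent components from observations that are linear mixtures of them, can be illustrated with a generic sparse-coding sketch. The code below is a toy, assumption-laden illustration, not the paper's actual ConCA algorithm: it fixes a hypothetical mixing matrix `A`, generates an "activation" `x = A @ s` from a sparse source vector `s`, and recovers `s` by ISTA (iterative soft-thresholding), which imposes an L1 sparsity prior to resolve the ill-posedness of the unmixing (here, more concepts than dimensions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): a d-dimensional "activation" x is a
# linear mixture x = A @ s of k concept signals s, of which only a few
# are active at a time. With k > d the unmixing is ill-posed without
# a sparsity prior.
d, k = 32, 64
A = rng.normal(size=(d, k)) / np.sqrt(d)   # hypothetical mixing matrix
s_true = np.zeros(k)
active = rng.choice(k, size=3, replace=False)
s_true[active] = rng.normal(size=3) + 3.0  # three strongly active concepts
x = A @ s_true

# ISTA: solve min_s 0.5*||x - A s||^2 + lam*||s||_1 by gradient steps
# followed by soft-thresholding (the L1 proximal operator).
lam = 0.05
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
s = np.zeros(k)
for _ in range(500):
    grad = A.T @ (A @ s - x)
    z = s - grad / L
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

# The recovered code is sparse; its largest entries sit on the true support.
print("true support:", sorted(active.tolist()))
print("top recovered:", sorted(np.argsort(np.abs(s))[-3:].tolist()))
```

This sketch assumes the mixing matrix is known; in the unsupervised setting described in the abstract, both the mixing and the sources must be estimated jointly (as in dictionary learning), with the sparsity prior playing the same disambiguating role.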