🤖 AI Summary
Traditional topic models are constrained by bag-of-words representations and word-list topic formulations, limiting their capacity to capture semantic abstraction and complex conceptual structure. To address this, we propose Mechanistic Topic Models (MTMs), the first topic modeling framework grounded in interpretable neural semantic features extracted from large language models (LLMs) via sparse autoencoders; topics are defined as patterns of feature activations, transcending purely lexical representations. The method enables controllable text generation via topic-based steering vectors and introduces topic judge, an LLM-driven pairwise comparison framework for automated, semantics-aware assessment of topic quality. Experiments on five benchmark datasets show that MTMs match or exceed traditional and neural baselines on coherence metrics, are consistently preferred by topic judge, and effectively steer LLMs toward semantically faithful generation.
📝 Abstract
Traditional topic models are effective at uncovering latent themes in large text collections. However, due to their reliance on bag-of-words representations, they struggle to capture semantically abstract features. While some neural variants use richer representations, they are similarly constrained by expressing topics as word lists, which limits their ability to articulate complex topics. We introduce Mechanistic Topic Models (MTMs), a class of topic models that operate on interpretable features learned by sparse autoencoders (SAEs). By defining topics over this semantically rich space, MTMs can reveal deeper conceptual themes with expressive feature descriptions. Moreover, uniquely among topic models, MTMs enable controllable text generation using topic-based steering vectors. To properly evaluate MTM topics against word-list-based approaches, we propose *topic judge*, an LLM-based pairwise comparison evaluation framework. Across five datasets, MTMs match or exceed traditional and neural baselines on coherence metrics, are consistently preferred by topic judge, and enable effective steering of LLM outputs.
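The abstract does not spell out how a topic becomes a steering vector, so the following is a purely illustrative sketch under assumptions not stated in the paper: a topic is a weight pattern over SAE features, the corresponding steering direction is the weighted sum of the SAE decoder's feature directions, and steering adds that (scaled) direction to the model's hidden states. All names (`steering_vector`, `steer`, `alpha`, the toy dimensions) are hypothetical.

```python
import numpy as np

def steering_vector(sae_decoder, topic_feature_weights):
    """Combine SAE decoder directions into a unit-norm topic direction.

    sae_decoder: (n_features, d_model) array; row i is feature i's direction.
    topic_feature_weights: (n_features,) activation pattern defining the topic.
    """
    v = topic_feature_weights @ sae_decoder  # weighted sum of feature directions
    return v / np.linalg.norm(v)             # unit-normalize for a stable scale

def steer(hidden_states, v, alpha=4.0):
    """Add the scaled topic direction to every token's hidden state (alpha is
    an assumed, tunable steering strength)."""
    return hidden_states + alpha * v

# Toy example: 3 SAE features in a 4-dimensional model
rng = np.random.default_rng(0)
W_dec = rng.normal(size=(3, 4))          # stand-in SAE decoder weights
topic = np.array([0.7, 0.0, 0.3])        # topic = pattern of feature activations
v = steering_vector(W_dec, topic)
h = rng.normal(size=(5, 4))              # 5 tokens' hidden states
h_steered = steer(h, v)                  # nudged toward the topic direction
```

In an actual LLM this addition would be applied to residual-stream activations during decoding; the sketch only shows the vector arithmetic the abstract alludes to.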