🤖 AI Summary
Interpretability of knowledge representations in large language models (LLMs) is hindered by the semantic gap between sparse autoencoder (SAE) features and human-understandable concepts. To address this, we propose a three-stage analytical framework—*Identification → Interpretation → Validation*—implemented as an integrated system combining SAEs, visualization techniques, and an interactive interface. This system enables concept-driven feature alignment, dynamic mapping, and behavioral validation. Our key contribution lies in grounding abstract SAE features in interpretable semantic concepts and significantly reducing the cognitive load of manual interpretation via interactive exploration. Two application case studies and a user study demonstrate that our approach substantially improves the efficiency of discovering and validating meaningful concepts, while enhancing researchers' understanding of—and operational capacity over—LLM internal representation mechanisms.
📝 Abstract
Large language models (LLMs) have achieved remarkable performance across a wide range of natural language tasks, yet understanding how they internally represent knowledge remains a significant challenge. Although Sparse Autoencoders (SAEs) have emerged as a promising technique for extracting interpretable features from LLMs, SAE features do not inherently align with human-understandable concepts, making their interpretation cumbersome and labor-intensive. To bridge the gap between SAE features and human concepts, we present ConceptViz, a visual analytics system designed for exploring concepts in LLMs. ConceptViz implements a novel Identification → Interpretation → Validation pipeline, enabling users to query SAEs using concepts of interest, interactively explore concept-to-feature alignments, and validate the correspondences through model behavior verification. We demonstrate the effectiveness of ConceptViz through two usage scenarios and a user study. Our results show that ConceptViz enhances interpretability research by streamlining the discovery and validation of meaningful concept representations in LLMs, ultimately aiding researchers in building more accurate mental models of LLM features. Our code and user guide are publicly available at https://github.com/Happy-Hippo209/ConceptViz.