ConceptViz: A Visual Analytics Approach for Exploring Concepts in Large Language Models

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Interpretability of knowledge representations in large language models (LLMs) is hindered by the semantic gap between sparse autoencoder (SAE) features and human-understandable concepts. To address this, we propose a three-stage analytical pipeline, Identification → Interpretation → Validation, implemented as an integrated system combining SAEs, visualization techniques, and an interactive interface. The system supports concept-driven feature retrieval, interactive concept-to-feature alignment, and behavioral validation. Its key contribution is grounding abstract SAE features in interpretable semantic concepts while reducing the cognitive load of manual interpretation through interactive exploration. Two usage scenarios and a user study show that our approach substantially improves the efficiency of discovering and validating meaningful concepts, and deepens researchers' understanding of, and control over, the internal representation mechanisms of LLMs.

📝 Abstract
Large language models (LLMs) have achieved remarkable performance across a wide range of natural language tasks, yet understanding how LLMs internally represent knowledge remains a significant challenge. Although Sparse Autoencoders (SAEs) have emerged as a promising technique for extracting interpretable features from LLMs, SAE features do not inherently align with human-understandable concepts, making their interpretation cumbersome and labor-intensive. To bridge the gap between SAE features and human concepts, we present ConceptViz, a visual analytics system designed for exploring concepts in LLMs. ConceptViz implements a novel Identification => Interpretation => Validation pipeline, enabling users to query SAEs using concepts of interest, interactively explore concept-to-feature alignments, and validate the correspondences through model behavior verification. We demonstrate the effectiveness of ConceptViz through two usage scenarios and a user study. Our results show that ConceptViz enhances interpretability research by streamlining the discovery and validation of meaningful concept representations in LLMs, ultimately aiding researchers in building more accurate mental models of LLM features. Our code and user guide are publicly available at https://github.com/Happy-Hippo209/ConceptViz.
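The SAE machinery the abstract builds on can be sketched in a few lines. Everything below (dimensions, learning rate, L1 weight, the plain-numpy training loop, and the random stand-in for LLM activations) is a hypothetical toy illustration of the general SAE technique, not the paper's actual configuration:

```python
import numpy as np

# Minimal sparse autoencoder (SAE) sketch over stand-in "LLM activations".
# All sizes and hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)
d_model, d_sae, n = 16, 64, 512            # activation dim, feature dim, samples
acts = rng.normal(size=(n, d_model))       # stand-in for residual-stream activations

W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU encoder yields sparse, non-negative feature activations
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    return f @ W_dec + b_dec

lr, l1 = 1e-3, 1e-3                        # learning rate and L1 sparsity weight
for _ in range(200):
    f = encode(acts)
    err = decode(f) - acts                 # reconstruction error
    # gradients of mean squared error + l1 * mean|f| (constant factors folded into lr)
    gW_dec = f.T @ err / n
    gb_dec = err.mean(0)
    df = (err @ W_dec.T + l1 * np.sign(f)) * (f > 0)   # ReLU gradient mask
    gW_enc = acts.T @ df / n
    gb_enc = df.mean(0)
    W_enc -= lr * gW_enc; b_enc -= lr * gb_enc
    W_dec -= lr * gW_dec; b_dec -= lr * gb_dec

features = encode(acts)                    # one sparse feature vector per sample
sparsity = (features > 0).mean()           # fraction of active features
```

Each row of `W_dec` is a feature's "decoder direction" in activation space; interpreting what concept each such direction encodes is exactly the manual step the paper's pipeline aims to streamline.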
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between SAE features and human-understandable concepts in LLMs
Enabling interactive exploration of concept-to-feature alignments in language models
Streamlining discovery and validation of meaningful concept representations in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual analytics system for exploring LLM concepts
Pipeline for identifying, interpreting, validating features
Interactive alignment of human concepts to SAE features
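The "interactive alignment of human concepts to SAE features" idea can be sketched as ranking features by similarity between a concept embedding and each feature's decoder direction. The embedding, dimensions, and ranking below are hypothetical stand-ins, not ConceptViz's actual implementation:

```python
import numpy as np

# Hedged sketch: rank SAE features by cosine similarity between a concept
# embedding and each feature's decoder direction. All values are toy data.
rng = np.random.default_rng(1)
d_model, d_sae = 16, 64
W_dec = rng.normal(size=(d_sae, d_model))    # one decoder direction per feature

def top_features(concept_vec, k=5):
    dirs = W_dec / np.linalg.norm(W_dec, axis=1, keepdims=True)
    c = concept_vec / np.linalg.norm(concept_vec)
    sims = dirs @ c                           # cosine similarity per feature
    order = np.argsort(-sims)[:k]             # indices of the k best-aligned features
    return list(zip(order.tolist(), sims[order].tolist()))

concept = rng.normal(size=d_model)            # stand-in for a concept embedding
ranked = top_features(concept)                # [(feature_id, similarity), ...]
```

A system like the one described would then let the user inspect the top-ranked features and validate each candidate alignment against model behavior.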
👥 Authors
Haoxuan Li, State Key Lab of CAD&CG, Zhejiang University
Zhen Wen, State Key Lab of CAD&CG, Zhejiang University
Qiqi Jiang, State Key Lab of CAD&CG, Zhejiang University
Chenxiao Li, Zhejiang University
Yuwei Wu, Ph.D. candidate, GRASP Lab, University of Pennsylvania (Robotics, Trajectory Optimization, Task and Motion Planning)
Yuchen Yang, State Key Lab of CAD&CG, Zhejiang University
Yiyao Wang, State Key Lab of CAD&CG, Zhejiang University (Visualization)
Xiuqi Huang, Zhejiang University (Data Management)
Minfeng Zhu, Zhejiang University (Visualisation, Math)
Wei Chen, State Key Lab of CAD&CG, Zhejiang University