AI Summary
Existing neuron interpretation methods focus on individual neurons, struggling with feature entanglement and fragmented explanations. To address this, we propose NeurFlow, a novel framework that shifts the interpretability unit from isolated neurons to functionally coupled neuron groups. NeurFlow introduces a pipeline comprising neuron functional similarity measurement, automatic core neuron identification, cross-layer clustering, and hierarchical interaction graph construction, yielding an interpretable, circuit-based hierarchical model. This approach breaks away from conventional single-neuron analysis paradigms and significantly improves explanation fidelity across multiple vision models (e.g., ResNet, ViT). Moreover, it enables downstream applications such as image debugging and automatic concept annotation, achieving both high interpretability and computational efficiency. Experimental results demonstrate superior faithfulness, stability, and scalability compared to state-of-the-art baselines, establishing NeurFlow as a principled framework for holistic, system-level neural network interpretation.
Abstract
Understanding the inner workings of neural networks is essential for enhancing model performance and interpretability. Current research predominantly focuses on examining the connection between individual neurons and the model's final predictions, an approach that struggles to interpret the internal workings of the model, particularly when neurons encode multiple unrelated features. In this paper, we propose a novel framework that transitions the focus from analyzing individual neurons to investigating groups of neurons, shifting the emphasis from neuron-output relationships to functional interactions between neurons. Our automated framework, NeurFlow, first identifies core neurons and clusters them into groups based on shared functional relationships, enabling a more coherent and interpretable view of the network's internal processes. This approach facilitates the construction of a hierarchical circuit representing neuron interactions across layers, thus improving interpretability while reducing computational costs. Our extensive empirical studies validate the fidelity of our proposed NeurFlow. Additionally, we showcase its utility in practical applications such as image debugging and automatic concept labeling, thereby highlighting its potential to advance the field of neural network explainability.
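The grouping idea at the heart of the pipeline — measure how similarly neurons behave, then cluster them into functional groups — can be sketched as a toy example. This is an illustrative sketch only, not the paper's actual algorithm: the cosine similarity over activation profiles and the greedy threshold clustering below are simplifying assumptions for exposition.

```python
import numpy as np

def functional_similarity(acts):
    """Pairwise cosine similarity between neuron activation profiles.
    acts: (n_neurons, n_inputs) array of activations over a probe set.
    (Illustrative proxy for a functional-similarity measure.)"""
    norm = acts / (np.linalg.norm(acts, axis=1, keepdims=True) + 1e-9)
    return norm @ norm.T

def group_neurons(sim, threshold=0.8):
    """Greedy clustering: assign each neuron to the first existing group
    whose seed neuron it resembles above `threshold`, else start a new group."""
    groups = []  # each group is a list of neuron indices; groups[i][0] is the seed
    for i in range(sim.shape[0]):
        for g in groups:
            if sim[i, g[0]] >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Toy example: 4 neurons over 5 probe inputs; neurons 0/1 and 2/3 behave alike,
# so they should land in the same functional groups.
acts = np.array([
    [1.0, 0.9, 0.1, 0.0, 0.2],
    [0.9, 1.0, 0.0, 0.1, 0.3],
    [0.0, 0.1, 1.0, 0.8, 0.0],
    [0.1, 0.0, 0.9, 1.0, 0.1],
])
sim = functional_similarity(acts)
groups = group_neurons(sim, threshold=0.8)  # → [[0, 1], [2, 3]]
```

Running this per layer and then linking groups whose members interact strongly across layers is, in spirit, how a hierarchical circuit of neuron groups could be assembled.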