NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions

πŸ“… 2025-02-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing neuron interpretation methods focus on individual neurons, struggling with feature entanglement and fragmented explanations. To address this, we propose NeurFlowβ€”a novel framework that shifts the interpretability unit from isolated neurons to functionally coupled neuron groups. NeurFlow introduces a pipeline comprising neuron functional similarity measurement, automatic core neuron identification, cross-layer clustering, and hierarchical interaction graph construction, yielding an interpretable, circuit-based hierarchical model. This approach breaks away from conventional single-neuron analysis paradigms and significantly improves explanation fidelity across multiple vision models (e.g., ResNet, ViT). Moreover, it enables downstream applications such as image debugging and automatic concept annotation, achieving both high interpretability and computational efficiency. Experimental results demonstrate superior faithfulness, stability, and scalability compared to state-of-the-art baselines, establishing NeurFlow as a principled framework for holistic, system-level neural network interpretation.
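The pipeline described above (functional similarity measurement, core neuron identification, and grouping) can be sketched in a minimal, illustrative form. This is not the paper's actual algorithm; the function names, the cosine-similarity measure, the mean-activation ranking for core neurons, and the greedy threshold-based grouping heuristic are all assumptions made for illustration only.

```python
import numpy as np

def functional_similarity(acts):
    """Cosine similarity between neuron activation patterns.

    acts: (n_neurons, n_samples) activation matrix.
    Returns an (n_neurons, n_neurons) similarity matrix.
    """
    norm = acts / (np.linalg.norm(acts, axis=1, keepdims=True) + 1e-8)
    return norm @ norm.T

def core_neurons(acts, k):
    """Illustrative core-neuron selection: rank by mean absolute
    activation and keep the top-k indices."""
    scores = np.abs(acts).mean(axis=1)
    return np.argsort(scores)[::-1][:k]

def group_neurons(sim, threshold=0.8):
    """Greedy grouping heuristic: a neuron joins the first existing
    group whose members are all at least `threshold`-similar to it;
    otherwise it starts a new group."""
    groups = []
    for i in range(sim.shape[0]):
        for g in groups:
            if all(sim[i, j] >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Toy example: neurons 0 and 1 fire proportionally (same function),
# neuron 2 has a different activation pattern.
acts = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],
                 [-1.0, 0.0, 1.0]])
sim = functional_similarity(acts)
groups = group_neurons(sim)   # neurons 0 and 1 end up in one group
```

In the actual framework, groups like these would then serve as nodes in a hierarchical interaction graph across layers; that graph-construction step is omitted here for brevity.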

πŸ“ Abstract
Understanding the inner workings of neural networks is essential for enhancing model performance and interpretability. Current research predominantly focuses on examining the connection between individual neurons and the model's final predictions, an approach that struggles to interpret the model's internal workings, particularly when neurons encode multiple unrelated features. In this paper, we propose a novel framework that transitions the focus from analyzing individual neurons to investigating groups of neurons, shifting the emphasis from neuron-output relationships to functional interactions between neurons. Our automated framework, NeurFlow, first identifies core neurons and clusters them into groups based on shared functional relationships, enabling a more coherent and interpretable view of the network's internal processes. This approach facilitates the construction of a hierarchical circuit representing neuron interactions across layers, thus improving interpretability while reducing computational costs. Our extensive empirical studies validate the fidelity of our proposed NeurFlow. Additionally, we showcase its utility in practical applications such as image debugging and automatic concept labeling, thereby highlighting its potential to advance the field of neural network explainability.
Problem

Research questions and friction points this paper is trying to address.

Interpret neural network internal workings
Shift focus from individual to grouped neurons
Enhance model interpretability and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Groups neurons by shared functions
Automates hierarchical circuit construction
Enhances interpretability and reduces costs
πŸ”Ž Similar Papers
No similar papers found.
Tue M. Cao
Institute for AI Innovation and Societal Impact (AI4LIFE), Hanoi University of Science and Technology, Hanoi, Vietnam
Nhat X. Hoang
University of Florida, Gainesville, Florida, USA
Hieu H. Pham
College of Engineering & Computer Science, VinUni-Illinois Smart Health Center, VinUniversity
AI, Computer Vision, Deep Learning, Medical Image Analysis, Computational Bioimaging
Phi Le Nguyen
Institute for AI Innovation and Societal Impact (AI4LIFE), Hanoi University of Science and Technology, Hanoi, Vietnam
My T. Thai
Professor, University of Florida, IEEE Fellow
Explainable AI, Security and Privacy, Network Science, Optimization