A Case Study on Concept Induction for Neuron-Level Interpretability in CNNs

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of interpreting the semantics of hidden neurons in convolutional neural networks (CNNs), a gap that hinders model explainability and trustworthiness. The authors adapt the concept attribution framework—originally developed for object-centric datasets—to the large-scale SUN2012 scene recognition dataset for the first time. By analyzing neuron activation patterns, they automatically assign human-interpretable semantic labels to hidden neurons and validate these attributions through network visualization and statistical significance testing. The experiments associate multiple neurons with clear semantic concepts, not only demonstrating the approach's effectiveness on complex, real-world scenes but also providing the first evidence of the concept attribution framework's generalizability across diverse datasets.

📝 Abstract
Deep Neural Networks (DNNs) have advanced applications in domains such as healthcare, autonomous systems, and scene understanding, yet the internal semantics of their hidden neurons remain poorly understood. Prior work introduced a Concept Induction-based framework for hidden neuron analysis and demonstrated its effectiveness on the ADE20K dataset. In this case study, we investigate whether the approach generalizes by applying it to the SUN2012 dataset, a large-scale scene recognition benchmark. Using the same workflow, we assign interpretable semantic labels to neurons and validate them through web-sourced images and statistical testing. Our findings confirm that the method transfers to SUN2012, showing its broader applicability.
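The validation step described above—attaching a semantic label to a neuron only when its activations on images of the candidate concept are significantly higher than on other images—can be sketched as follows. This is an illustrative stand-in, not the authors' exact pipeline: the neuron activations are simulated, the concept name is hypothetical, and a one-sided permutation test is used in place of whatever statistical test the paper employs.

```python
# Minimal sketch (illustrative, not the paper's exact method): label a
# hidden neuron with a concept only if its activations on concept images
# significantly exceed those on non-concept images.
import random
from statistics import mean

def permutation_p_value(concept_acts, other_acts, n_perm=2000, seed=0):
    """One-sided p-value for mean(concept_acts) > mean(other_acts)."""
    rng = random.Random(seed)
    observed = mean(concept_acts) - mean(other_acts)
    pooled = concept_acts + other_acts
    k = len(concept_acts)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Count permutations whose group difference matches or beats
        # the observed one.
        if mean(pooled[:k]) - mean(pooled[k:]) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical activations of one neuron on scene images that do / do
# not contain the candidate concept (values are simulated, not real).
rng = random.Random(42)
concept_acts = [rng.gauss(2.0, 0.5) for _ in range(40)]  # e.g., "bed" scenes
other_acts = [rng.gauss(0.5, 0.5) for _ in range(40)]    # unrelated scenes

p = permutation_p_value(concept_acts, other_acts)
label = "bed" if p < 0.05 else None  # attach the label only if significant
```

With such well-separated activation distributions the test rejects the null and the neuron receives the label; in practice the activations would come from forward passes over SUN2012 images grouped by the concepts proposed by the concept induction step.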
Problem

Research questions and friction points this paper is trying to address.

Neuron Interpretability
Concept Induction
CNN
Semantic Understanding
Deep Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept Induction
Neuron Interpretability
CNN
Generalization
Semantic Labeling