🤖 AI Summary
To address the challenge of interpreting the semantic meaning of hidden-layer neurons in deep vision networks, this paper proposes Describe-and-Dissect (DnD), a training-free, annotation-free, and concept-set-agnostic framework for neuron semantic interpretation. DnD leverages multimodal large language models and zero-shot prompting to perform image–text alignment analysis on neuron activations, generating high-quality, open-vocabulary natural language explanations. Its core contribution is neuron semantic decoding fully decoupled from model training. Experiments demonstrate that DnD achieves significantly higher label quality than existing methods: human evaluators select its explanations as the best 2.1× more frequently than those of baseline approaches. Furthermore, DnD supports interpretability diagnostics for a land-cover prediction model, validating its generalizability and practical utility in real-world applications.
📄 Abstract
In this paper, we propose Describe-and-Dissect (DnD), a novel method to describe the roles of hidden neurons in vision networks. DnD utilizes recent advancements in multimodal deep learning to produce complex natural language descriptions, without the need for labeled training data or a predefined set of concepts to choose from. Additionally, DnD is training-free, meaning we don't train any new models and can easily leverage more capable general-purpose models in the future. We have conducted extensive qualitative and quantitative analysis to show that DnD outperforms prior work by providing higher-quality neuron descriptions. Specifically, our method on average provides the highest-quality labels and is more than 2× as likely to be selected as the best explanation for a neuron than the best baseline. Finally, we present a use case providing critical insights into land cover prediction models for sustainability applications. Our code and data are available at https://github.com/Trustworthy-ML-Lab/Describe-and-Dissect.