A Multimodal Automated Interpretability Agent

📅 2024-04-22
🏛️ International Conference on Machine Learning
📈 Citations: 10
Influential: 1
📄 PDF
🤖 AI Summary
This work addresses three core challenges in neural network interpretability: the difficulty of interpreting neuron-level features, sensitivity to spurious correlations, and heavy reliance on manual effort for failure-mode discovery. The authors propose MAIA, a multimodal automated interpretability agent built on pretrained vision-language models (VLMs). MAIA composes a set of tools—input synthesis and editing, retrieval of maximally activating exemplars, and automated summarization and description of experimental results—to run iterative, modular experiments on subcomponents of other models. On a benchmark of synthetic vision neurons with paired ground-truth descriptions, MAIA produces neuron-level feature descriptions comparable to those of expert human experimenters. Across multiple computer vision models, it also improves the fidelity of feature explanations, reduces susceptibility to spurious features, and automatically identifies inputs likely to be misclassified with high confidence.

📝 Abstract
This paper describes MAIA, a Multimodal Automated Interpretability Agent. MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery. It equips a pre-trained vision-language model with a set of tools that support iterative experimentation on subcomponents of other models to explain their behavior. These include tools commonly used by human interpretability researchers: for synthesizing and editing inputs, computing maximally activating exemplars from real-world datasets, and summarizing and describing experimental results. Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior. We evaluate applications of MAIA to computer vision models. We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images. Across several trained models and a novel dataset of synthetic vision neurons with paired ground-truth descriptions, MAIA produces descriptions comparable to those generated by expert human experimenters. We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be mis-classified.
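The abstract describes an agent that composes tools—exemplar retrieval, input synthesis, and result summarization—in an iterative experiment loop. The sketch below illustrates that loop shape only; all names are hypothetical, the neuron is a toy stand-in, and the VLM's hypothesis generation is replaced by a fixed candidate list, so this is not the paper's implementation.

```python
# Minimal sketch of a MAIA-style iterative tool loop. Hypothetical names;
# the real system drives a pretrained VLM, which is mocked out here.
from dataclasses import dataclass


@dataclass
class Neuron:
    """Toy stand-in for a vision-model subcomponent under study.

    Activates strongly when its (text stand-in for an) input mentions "dog".
    """

    def activate(self, image: str) -> float:
        return 1.0 if "dog" in image else 0.1


def dataset_exemplars(neuron: Neuron, dataset: list[str], k: int = 2) -> list[str]:
    """Tool: return the k inputs that maximally activate the neuron."""
    return sorted(dataset, key=neuron.activate, reverse=True)[:k]


def synthesize_image(prompt: str) -> str:
    """Tool: stand-in for a text-to-image generator."""
    return f"synthetic image of {prompt}"


def describe_neuron(neuron: Neuron, dataset: list[str], hypotheses: list[str]) -> str:
    """Agent loop: observe exemplars, then test each candidate description
    with a synthesized input and keep the one the activations support."""
    _exemplars = dataset_exemplars(neuron, dataset)  # seed observation
    best, best_score = hypotheses[0], float("-inf")
    for h in hypotheses:  # in MAIA, a VLM proposes these iteratively
        probe = synthesize_image(h)  # causal test input
        score = neuron.activate(probe)
        if score > best_score:
            best, best_score = h, score
    return best


dataset = ["photo of a dog", "photo of a cat", "dog on a beach"]
label = describe_neuron(Neuron(), dataset, ["dog", "cat", "beach"])
print(label)  # -> dog
```

The point of the loop is that the agent does not just read off exemplars: it synthesizes new inputs to causally test each hypothesis against the neuron's activations before committing to a description.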
Problem

Research questions and friction points this paper is trying to address.

Neuron-level feature interpretation demands extensive manual effort
Learned features are sensitive to spurious correlations
Failure-mode discovery relies heavily on human experimenters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses neural models to automate interpretability of other neural models
Equips a pretrained vision-language model with composable experiment tools
Synthesizes and edits inputs; computes maximally activating exemplars from real-world datasets