Rethinking Crowd-Sourced Evaluation of Neuron Explanations

📅 2025-06-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Crowdsourced evaluation of neuron explanation quality suffers from high noise and substantial cost. Method: This paper introduces the first efficient crowdsourced evaluation framework that integrates importance sampling with Bayesian label-noise modeling. Unlike conventional approaches that rely solely on top-activation sample matching, our framework quantifies an explanation's coverage over the entire input space. Importance sampling focuses annotation effort on information-rich regions, while Bayesian rating aggregation robustly mitigates label noise. Contribution/Results: Experiments demonstrate that our method reduces evaluation cost by approximately 30× and annotation volume by 5× compared to standard crowdsourcing protocols. It enables large-scale comparative benchmarking of neuron explanations, systematically revealing performance boundaries and limitations across state-of-the-art visual explanation methods.

๐Ÿ“ Abstract
Interpreting individual neurons or directions in activation space is an important component of mechanistic interpretability. As such, many algorithms have been proposed to automatically produce neuron explanations, but it is often not clear how reliable these explanations are, or which methods produce the best explanations. This can be measured via crowd-sourced evaluations, but they can often be noisy and expensive, leading to unreliable results. In this paper, we carefully analyze the evaluation pipeline and develop a cost-effective and highly accurate crowd-sourced evaluation strategy. In contrast to previous human studies that only rate whether the explanation matches the most highly activating inputs, we estimate whether the explanation describes neuron activations across all inputs. To estimate this effectively, we introduce a novel application of importance sampling to determine which inputs are the most valuable to show to raters, leading to around 30x cost reduction compared to uniform sampling. We also analyze the label noise present in crowd-sourced evaluations and propose a Bayesian method to aggregate multiple ratings, leading to a further ~5x reduction in the number of ratings required for the same accuracy. Finally, we use these methods to conduct a large-scale study comparing the quality of neuron explanations produced by the most popular methods for two different vision models.
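The importance-sampling idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact estimator; the activation distribution, the choice of activation magnitude as the proposal weight, and the `importance_estimate` helper are all illustrative assumptions. The point is that sampling inputs non-uniformly and reweighting keeps the estimate of "fraction of all inputs the explanation matches" unbiased while concentrating rater effort on informative inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-input neuron activations, and binary
# "rater" labels saying whether the explanation matches each input.
activations = rng.exponential(scale=1.0, size=10_000)
labels = (activations > 1.5).astype(float)

def importance_estimate(weights, labels, n_samples, rng):
    """Estimate the mean label over ALL inputs by drawing inputs with
    probability proportional to `weights`, then correcting each draw
    by the ratio (uniform density / proposal density) so the estimate
    stays unbiased for the uniform average."""
    p = weights / weights.sum()
    idx = rng.choice(len(labels), size=n_samples, p=p)
    ratio = (1.0 / len(labels)) / p[idx]  # uniform target / proposal
    return float(np.mean(labels[idx] * ratio))

# Weighting by activation magnitude concentrates the simulated
# annotation budget on inputs most relevant to the explanation.
est = importance_estimate(activations + 1e-8, labels, n_samples=500, rng=rng)
```

With only 500 weighted draws, `est` closely tracks the mean over all 10,000 inputs; uniform sampling would need many more draws for the same accuracy when matches are rare.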
Problem

Research questions and friction points this paper is trying to address.

Evaluating reliability of automated neuron explanation methods
Reducing cost and noise in crowd-sourced explanation evaluations
Comparing quality of neuron explanations across popular vision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Importance sampling for cost-effective evaluations
Bayesian method to reduce label noise
Large-scale comparison of neuron explanation methods
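The Bayesian label-noise idea can also be sketched with a deliberately simplified symmetric noise model. The `flip_rate`, the uniform `prior`, and the assumption that all raters share one error rate are illustrative, not the paper's fitted model; the sketch only shows how multiple noisy binary ratings combine into a posterior over the true label.

```python
import numpy as np

def posterior_match(ratings, prior=0.5, flip_rate=0.2):
    """Aggregate binary crowd ratings under a symmetric label-noise
    model: each rater reports the true label with probability
    1 - flip_rate. Returns P(true label = 1 | ratings) via Bayes' rule."""
    r = np.asarray(ratings, dtype=float)
    # Likelihood of the observed ratings under each candidate true label.
    like1 = np.prod(np.where(r == 1, 1 - flip_rate, flip_rate))
    like0 = np.prod(np.where(r == 1, flip_rate, 1 - flip_rate))
    num = prior * like1
    return float(num / (num + (1 - prior) * like0))
```

For example, three agreeing positive ratings yield a confident posterior, while a 1-vs-1 split leaves the uniform prior unchanged; aggregating this way extracts more signal per rating than majority voting over a fixed number of raters.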