🤖 AI Summary
Crowdsourced evaluation of neuron explanation quality suffers from high noise and substantial cost. Method: This paper introduces the first efficient crowdsourced evaluation framework that integrates importance sampling with Bayesian label-noise modeling. Unlike conventional approaches that rely solely on top-activation sample matching, our framework quantifies an explanation's coverage over the entire input space. Importance sampling focuses annotation effort on information-rich regions, while Bayesian rating aggregation robustly mitigates label noise. Contribution/Results: Experiments demonstrate that our method reduces evaluation cost by approximately 30× and annotation volume by 5× compared to standard crowdsourcing protocols. It enables large-scale comparative benchmarking of neuron explanations, systematically revealing performance boundaries and limitations across state-of-the-art visual explanation methods.
📝 Abstract
Interpreting individual neurons or directions in activation space is an important component of mechanistic interpretability. As such, many algorithms have been proposed to automatically produce neuron explanations, but it is often unclear how reliable these explanations are, or which methods produce the best explanations. This can be measured via crowdsourced evaluations, but these are often noisy and expensive, leading to unreliable results. In this paper, we carefully analyze the evaluation pipeline and develop a cost-effective and highly accurate crowdsourced evaluation strategy. In contrast to previous human studies that only rate whether the explanation matches the most highly activating inputs, we estimate whether the explanation describes neuron activations across all inputs. To estimate this effectively, we introduce a novel application of importance sampling to determine which inputs are the most valuable to show to raters, leading to around a 30x cost reduction compared to uniform sampling. We also analyze the label noise present in crowdsourced evaluations and propose a Bayesian method to aggregate multiple ratings, leading to a further ~5x reduction in the number of ratings required for the same accuracy. Finally, we use these methods to conduct a large-scale study comparing the quality of neuron explanations produced by the most popular methods for two different vision models.
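The two key ideas in the abstract can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the paper's actual implementation): inputs are drawn for rating with probability proportional to activation magnitude (importance sampling), self-normalized importance weights de-bias the resulting estimate back to the full input space, and each input's noisy binary ratings are aggregated with a simple Beta-Bernoulli posterior mean instead of a raw majority vote. All distributions, noise rates, and sample sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activation magnitudes of one neuron over a pool of inputs.
activations = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)
n_pool = len(activations)

# Importance sampling: draw inputs with probability q(x) proportional to
# |activation(x)| rather than uniformly, so raters mostly see the
# information-rich, highly activating inputs.
q = np.abs(activations) / np.abs(activations).sum()
idx = rng.choice(n_pool, size=50, replace=True, p=q)

# Self-normalized importance weights w = p/q with p uniform (1/N) de-bias
# the estimate back to an average over the whole input space.
weights = (1.0 / n_pool) / q[idx]

# Hypothetical noisy ratings: 3 binary votes per sampled input on whether
# the explanation matches it (1 = match), with 20% label noise.
true_match = activations[idx] > np.quantile(activations, 0.9)
votes = rng.random((len(idx), 3)) < np.where(true_match[:, None], 0.8, 0.2)

# Simple Bayesian aggregation: a Beta(1, 1) prior per input gives a
# posterior mean over its votes, damping the effect of single noisy raters.
post_mean = (1 + votes.sum(axis=1)) / (2 + votes.shape[1])

# Weighted estimate of how well the explanation covers the full input pool.
coverage = float(np.sum(weights * post_mean) / np.sum(weights))
print(f"estimated coverage: {coverage:.3f}")
```

A full treatment would also model per-rater reliability in the Bayesian aggregation step; the fixed Beta prior above is the simplest version of that idea.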