🤖 AI Summary
Human object recognition relies on selective visual information processing, yet the underlying strategies remain difficult to measure directly. To address this, we propose MAPS, a framework that transforms neural network attribution maps into interpretable saliency masks and systematically evaluates their alignment with human visual strategies via three criteria: pixel budget control, behavioral accuracy comparison, and cross-model attribution similarity. MAPS establishes the first behaviorally verifiable attribution-alignment paradigm, requiring only a modest amount of human or primate behavioral data for efficient matching, which substantially reduces the cost of traditional psychophysical experiments. Validated on both synthetic and real primate behavioral datasets, MAPS accurately identifies the attribution methods most consistent with biological vision, matching the behavioral validity of Bubble masking while drastically reducing the number of required trials.
📝 Abstract
Human core object recognition depends on the selective use of visual information, but the strategies guiding these choices are difficult to measure directly. We present MAPS (Masked Attribution-based Probing of Strategies), a behaviorally validated computational tool that tests whether explanations derived from artificial neural networks (ANNs) can also explain human vision. MAPS converts attribution maps into explanation-masked images (EMIs) and compares image-by-image human accuracies on these minimal, pixel-budget-limited images against accuracies on the full stimuli. MAPS provides a principled way to evaluate and choose among competing ANN interpretability methods. In silico, EMI-based behavioral similarity between models reliably recovers the ground-truth similarity computed from their attribution maps, establishing which explanation methods best capture a model's strategy. When applied to humans and macaques, MAPS identifies ANN-explanation combinations whose explanations align most closely with biological vision, achieving the behavioral validity of Bubble masks while requiring far fewer behavioral trials. Because it needs only access to model attributions and a modest set of behavioral data on the original images, MAPS avoids exhaustive psychophysics while offering a scalable tool for adjudicating explanations and linking human behavior, neural activity, and model decisions under a common standard.
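The core EMI construction can be sketched in a few lines: given an attribution map, reveal only the highest-attribution pixels up to a fixed pixel budget and replace the rest with a neutral fill. This is a minimal illustration under assumed conventions; `make_emi`, its parameter names, and the mid-gray fill are hypothetical choices, not the authors' implementation.

```python
import numpy as np

def make_emi(image, attribution, pixel_budget=0.1, fill_value=0.5):
    """Build an explanation-masked image (EMI): keep only the pixels
    with the highest attribution values, up to `pixel_budget` (a
    fraction of all pixels), and fill the rest with `fill_value`.

    image:       H x W x C array in [0, 1]
    attribution: H x W saliency map (higher = more important)
    """
    h, w = attribution.shape
    n_keep = max(1, int(round(pixel_budget * h * w)))
    # Threshold at the n_keep-th largest attribution value.
    thresh = np.partition(attribution.ravel(), -n_keep)[-n_keep]
    mask = attribution >= thresh  # H x W boolean reveal mask
    emi = np.full_like(image, fill_value)
    emi[mask] = image[mask]      # reveal only the budgeted pixels
    return emi, mask

# Example: a random image and attribution map with a 10% pixel budget.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
attr = rng.random((32, 32))
emi, mask = make_emi(img, attr, pixel_budget=0.1)
```

In the full MAPS procedure, EMIs like these would be shown to observers and their image-by-image accuracies compared with accuracies on the unmasked stimuli; ties in the attribution map can reveal slightly more pixels than the nominal budget.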