AI Summary
AOPC exhibits two critical limitations in cross-model fidelity evaluation: (1) sensitivity to model-specific perturbations, leading to distorted rankings; and (2) isolated scalar values lacking model- and task-aware upper and lower bounds, hindering interpretability. To address these, we propose Normalized AOPC (NAOPC), the first method to systematically expose and correct AOPC's cross-model bias. NAOPC introduces a generalizable normalization framework that calibrates perturbation curves against model-agnostic baselines and dynamically estimates task- and architecture-adaptive fidelity bounds. This enables faithful, fair, and robust cross-model and cross-dataset fidelity assessment. Extensive experiments demonstrate that NAOPC substantially improves comparability and robustness over AOPC: it overturns multiple established conclusions by reconstructing prior rankings, and its effectiveness is validated across diverse architectures (e.g., ViT, ResNet) and benchmarks (e.g., ImageNet, CIFAR-10).
Abstract
Deep neural network predictions are notoriously difficult to interpret. Feature attribution methods aim to explain these predictions by identifying the contribution of each input feature. Faithfulness, often evaluated using the area over the perturbation curve (AOPC), reflects how accurately feature attributions describe the internal mechanisms of deep neural networks. However, many studies rely on AOPC to compare faithfulness across different models, which we show can lead to false conclusions. Specifically, we find that AOPC is sensitive to variations in the model, resulting in unreliable cross-model comparisons. Moreover, AOPC scores are difficult to interpret in isolation without knowing the model-specific lower and upper limits. To address these issues, we propose a normalization approach, Normalized AOPC (NAOPC), enabling consistent cross-model evaluations and more meaningful interpretation of individual scores. Our experiments demonstrate that this normalization can radically change AOPC results, questioning the conclusions of earlier studies and offering a more robust framework for assessing feature attribution faithfulness. Our code is available at https://github.com/JoakimEdin/naopc.
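To make the abstract's idea concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for that). AOPC averages the drop in the predicted-class probability as the top-attributed features are progressively perturbed, and NAOPC rescales that score between model-specific lower and upper bounds; the variable names and the toy curve below are illustrative assumptions, not values from the paper:

```python
import numpy as np


def aopc(probs: np.ndarray) -> float:
    """Area over the perturbation curve.

    probs[k] is the model's probability for the originally predicted
    class after the k most important features (by the attribution) have
    been perturbed; probs[0] is the unperturbed prediction.
    """
    drops = probs[0] - probs[1:]  # probability drop at each perturbation step
    return float(drops.mean())


def naopc(score: float, lower: float, upper: float) -> float:
    """Normalize an AOPC score into [0, 1] using model-specific bounds.

    `lower` and `upper` are the AOPC scores of the least and most
    impactful perturbation orders for this model and input (estimated,
    per the paper, rather than fixed constants).
    """
    return (score - lower) / (upper - lower)


# Hypothetical perturbation curve for one input:
curve = np.array([0.9, 0.6, 0.4, 0.2, 0.1])
score = aopc(curve)  # mean of [0.3, 0.5, 0.7, 0.8] = 0.575
normalized = naopc(score, lower=0.1, upper=0.8)
```

The key point of the normalization is that the raw AOPC score (0.575 here) is only meaningful relative to what perturbation orders on this particular model could achieve, which is what the estimated bounds capture.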