AI Summary
To address the combinatorial explosion that hinders critical-region identification in black-box model attribution for discrete inputs (e.g., images), this paper formulates critical-region attribution as a minimum explainable subset selection problem under submodular functions, the first such formulation. We propose a bidirectional greedy optimization algorithm that simultaneously identifies the most and least important input regions while preserving theoretical approximation guarantees. Integrated with confidence-driven misattribution analysis and perturbation-based evaluation (Insertion/Deletion), our method achieves consistent improvements across eight foundation models: average Insertion and Deletion scores improve by 36.3% and 39.6%, respectively; attribution efficiency increases 1.6× over standard greedy search; and the maximum confidence of detected model misattributions rises by 86.1% on average.
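The bidirectional search can be illustrated with the classic deterministic "double greedy" scheme for unconstrained submodular maximization, which grows a kept set from the empty set while shrinking a candidate set from the full set. This is a minimal sketch, not the paper's actual algorithm; the set function `f` and the region weights below are purely illustrative.

```python
# Sketch of a bidirectional ("double") greedy pass over a submodular
# set function f. X grows from the empty set (most important regions
# kept); Y shrinks from the full set (least important regions dropped).
def double_greedy(f, elements):
    """Return a subset of `elements` approximately maximizing f."""
    X = set()
    Y = set(elements)
    for e in elements:
        gain_add = f(X | {e}) - f(X)   # marginal gain of adding e to X
        gain_del = f(Y - {e}) - f(Y)   # marginal gain of dropping e from Y
        if gain_add >= gain_del:
            X.add(e)                   # e is important: keep it
        else:
            Y.remove(e)                # e is unimportant: discard it
    return X                           # X == Y when the pass finishes

# Toy modular importance function: per-region weights (may be negative).
weights = {"a": 3.0, "b": -1.0, "c": 2.0, "d": -0.5}
f = lambda S: sum(weights[e] for e in S)
print(sorted(double_greedy(f, list(weights))))  # -> ['a', 'c']
```

Because each element is decided once from both directions, a single pass ranks regions while visiting each region's marginal gains only twice, which is the source of the efficiency gain over naive forward greedy search.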
Abstract
Developing trustworthy AI systems requires attribution methods that identify the input regions most influencing a model's decisions. The primary challenge for existing attribution methods lies in efficiently and accurately identifying input-prediction interactions. In particular, when the input data are discrete, such as images, analyzing the relationship between inputs and outputs is hampered by combinatorial explosion. In this paper, we propose a novel and efficient black-box attribution mechanism, LiMA (Less input is More faithful for Attribution), which reformulates the attribution of important regions as an optimization problem of submodular subset selection. First, to accurately assess interactions, we design a submodular function that quantifies subset importance and effectively captures its impact on decision outcomes. Then, to efficiently rank input sub-regions by their importance for attribution, we improve optimization efficiency through a novel bidirectional greedy search algorithm. LiMA identifies both the most and least important samples while ensuring an optimal attribution boundary that minimizes errors. Extensive experiments on eight foundation models demonstrate that our method provides faithful interpretations with fewer regions and generalizes strongly, with average improvements of 36.3% in Insertion and 39.6% in Deletion. Our method is also 1.6 times faster than naive greedy search in attribution efficiency. Furthermore, when explaining the reasons behind model prediction errors, the highest confidence achieved by our method is, on average, 86.1% higher than that of state-of-the-art attribution algorithms. The code is available at https://github.com/RuoyuChen10/LIMA.
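The Insertion and Deletion scores cited above are perturbation-based faithfulness metrics: regions are revealed (Insertion) or removed (Deletion) in attribution order while tracking model confidence, and the area under that curve is reported. The following is a minimal sketch under stated assumptions; `model` is a stand-in scoring function and the toy signal is illustrative, not the paper's evaluation code.

```python
# Sketch of perturbation-based Insertion/Deletion evaluation: walk through
# regions in attribution order (most important first), reveal or remove
# each one, and integrate the model's confidence curve (trapezoidal AUC).
def insertion_deletion_auc(model, x, baseline, order, mode="insertion"):
    """AUC of model confidence as regions are inserted or deleted."""
    cur = list(baseline) if mode == "insertion" else list(x)
    scores = [model(cur)]
    for i in order:
        # Insertion reveals region i from x; Deletion masks it to baseline.
        cur[i] = x[i] if mode == "insertion" else baseline[i]
        scores.append(model(cur))
    # Trapezoidal rule over the normalized fraction of perturbed regions.
    return sum((a + b) / 2 for a, b in zip(scores, scores[1:])) / len(order)

# Toy example: "confidence" is the mean of the visible signal.
x = [1.0, 1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0, 0.0]
model = lambda v: sum(v) / len(v)
order = [0, 1, 2, 3]  # hypothetical attribution ranking, best first
ins = insertion_deletion_auc(model, x, baseline, order, "insertion")
dele = insertion_deletion_auc(model, x, baseline, order, "deletion")
print(ins, dele)  # -> 0.5 0.5
```

A faithful attribution pushes the Insertion curve up early (high AUC) and drives the Deletion curve down early (low AUC), which is why the paper reports improvements on both metrics.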