🤖 AI Summary
Existing vision-language explanation methods model only first-order feature attributions, failing to capture the complex higher-order interactions between the image and text modalities. To address this limitation, the authors propose FIxLIP, a cross-modal attribution framework based on the weighted Banzhaf interaction index, enabling unified quantification of second- and higher-order feature interactions within joint image-text representations. The method applies the weighted Banzhaf index to interpret language-image pre-trained models such as CLIP and SigLIP-2, and extends established evaluation metrics, including the pointing game and the area between insertion/deletion curves, to support fine-grained, verifiable interaction analysis. Experiments on MS COCO and ImageNet-1k demonstrate that the approach outperforms state-of-the-art first-order attribution methods. Moreover, it uncovers distinct higher-order interaction patterns across models (e.g., CLIP vs. SigLIP-2, ViT-B/32 vs. ViT-L/16), enhancing both the interpretability and trustworthiness of cross-modal systems.
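For intuition, the p-weighted Banzhaf interaction index for a pair of players can be computed exactly on a toy cooperative game by enumerating all coalitions of the remaining players. This is a minimal sketch, not the paper's implementation: the function names are illustrative, exhaustive enumeration is exponential in the number of players, and real image/text inputs would require sampling-based approximation.

```python
from itertools import combinations

def weighted_banzhaf_interaction(value, n, i, j, p=0.5):
    """p-weighted Banzhaf interaction index for the pair (i, j).

    value: set function v mapping a frozenset of player indices to a float
    n:     number of players
    p:     weight in (0, 1); p = 0.5 recovers the classic Banzhaf index
    """
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for size in range(len(rest) + 1):
        for subset in combinations(rest, size):
            S = frozenset(subset)
            # each coalition S is weighted by p^|S| * (1-p)^(n-2-|S|)
            w = p ** len(S) * (1 - p) ** (len(rest) - len(S))
            # discrete second-order derivative of v at S w.r.t. (i, j)
            delta = (value(S | {i, j}) - value(S | {i})
                     - value(S | {j}) + value(S))
            total += w * delta
    return total
```

For a game where the payoff is 1 exactly when both players 0 and 1 are present, the pair (0, 1) receives interaction 1.0 for any choice of p, while an additive game yields zero interaction for every pair.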
📝 Abstract
Language-image pre-training (LIP) enables the development of vision-language models capable of zero-shot classification, localization, multimodal retrieval, and semantic understanding. Various explanation methods have been proposed to visualize the importance of input image-text pairs on the model's similarity outputs. However, popular saliency maps are limited to capturing only first-order attributions, overlooking the complex cross-modal interactions intrinsic to such encoders. We introduce faithful interaction explanations of LIP models (FIxLIP) as a unified approach to decomposing the similarity in vision-language encoders. FIxLIP is rooted in game theory, where we analyze how the weighted Banzhaf interaction index offers greater flexibility and improves computational efficiency over the Shapley interaction quantification framework. From a practical perspective, we propose how to naturally extend explanation evaluation metrics, like the pointing game and the area between the insertion/deletion curves, to second-order interaction explanations. Experiments on MS COCO and ImageNet-1k benchmarks validate that second-order methods like FIxLIP outperform first-order attribution methods. Beyond delivering high-quality explanations, we demonstrate the utility of FIxLIP in comparing different models like CLIP vs. SigLIP-2 and ViT-B/32 vs. ViT-L/16.
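The insertion/deletion evaluation mentioned above can be sketched for a generic first-order ranking: features are revealed (insertion) or masked (deletion) in attribution order, the model is re-scored at each step, and the area between the two curves rewards rankings that put truly important features first. This is a hedged toy sketch, assuming a simple scalar-scoring model and a baseline masking value; the helper names are not from the paper, whose second-order variant ranks feature pairs rather than single features.

```python
def insertion_deletion_area(model, x, baseline, ranking):
    """Area between the insertion and deletion curves for a feature ranking.

    model:    callable scoring a list of feature values with a scalar
    x:        input feature values
    baseline: masking values (same length as x)
    ranking:  feature indices ordered most- to least-important
    Higher area means the ranking places important features first.
    """
    ins, dele = list(baseline), list(x)
    ins_curve, del_curve = [model(ins)], [model(dele)]
    for idx in ranking:
        ins[idx] = x[idx]          # insertion: reveal top features first
        dele[idx] = baseline[idx]  # deletion: mask top features first
        ins_curve.append(model(ins))
        del_curve.append(model(dele))
    dx = 1.0 / len(ranking)        # curves sampled at fractions 0..1
    def trapezoid(ys):
        return sum((a + b) / 2 * dx for a, b in zip(ys, ys[1:]))
    return trapezoid(ins_curve) - trapezoid(del_curve)
```

With an additive model `sum(v)` and features `[3, 1, 2]`, the correct descending ranking `[0, 2, 1]` yields a strictly larger area than the reversed ranking, illustrating why this metric can compare attribution methods.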