🤖 AI Summary
Existing explainable object recognition methods rely on vision-language models (e.g., CLIP) but condition only weakly on explanatory structure due to limitations of the text encoder; moreover, mainstream benchmarks provide only a single, often noisy, natural-language rationale per image, failing to capture the diversity of discriminative visual features.
Method: We propose a multi-rationale explainable recognition paradigm: (i) a benchmark in which each image is annotated with multiple human-provided, semantically diverse rationales, together with dedicated evaluation metrics; (ii) a training-free contrastive conditional inference (CCI) framework that explicitly models the joint probability distribution over image–category–rationale triples; and (iii) prompt engineering combined with multi-rationale consensus for zero-shot classification and explanation generation.
Contribution/Results: Our approach achieves state-of-the-art performance on the multi-rationale benchmark, significantly improving explanation diversity, faithfulness, and cross-domain generalization while simultaneously boosting zero-shot classification accuracy.
📝 Abstract
Explainable object recognition using vision-language models such as CLIP involves predicting accurate category labels supported by rationales that justify the decision-making process. Existing methods typically rely on prompt-based conditioning, which suffers from limitations in CLIP's text encoder and provides only weak conditioning on explanatory structures. Additionally, prior datasets are often restricted to a single, frequently noisy, rationale per image, which fails to capture the full diversity of discriminative image features. In this work, we introduce a multi-rationale explainable object recognition benchmark comprising datasets in which each image is annotated with multiple ground-truth rationales, along with evaluation metrics designed to offer a more comprehensive representation of the task. To overcome the limitations of previous approaches, we propose a contrastive conditional inference (CCI) framework that explicitly models the probabilistic relationships among image embeddings, category labels, and rationales. Without requiring any training, our framework enables more effective conditioning on rationales to predict accurate object categories. Our approach achieves state-of-the-art results on the multi-rationale explainable object recognition benchmark, including strong zero-shot performance, and sets a new standard for both classification accuracy and rationale quality. Together with the benchmark, this work provides a more complete framework for evaluating future models in explainable object recognition. The code will be made available online.
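The core idea of conditioning on rationales via a joint distribution over image–category–rationale triples can be illustrated with a minimal sketch. This is not the paper's implementation: random unit vectors stand in for CLIP's image and text embeddings, and the category names, rationale phrases, and prompt template mentioned in the comments are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical stand-ins for CLIP embeddings (the actual method would use
# CLIP's image encoder and text encoder on real prompts).
rng = np.random.default_rng(0)
d = 16
image_emb = rng.normal(size=d)
image_emb /= np.linalg.norm(image_emb)

categories = ["cat", "dog", "bird"]                      # illustrative labels
rationales = ["pointed ears", "a wagging tail", "feathered wings"]

# One text embedding per (category, rationale) pair, e.g. from a prompt like
# "a photo of a {category}, because it has {rationale}".
text_emb = rng.normal(size=(len(categories), len(rationales), d))
text_emb /= np.linalg.norm(text_emb, axis=-1, keepdims=True)

# Joint score s(x, y, r): cosine similarity between the image embedding and
# each (category, rationale) prompt embedding.
scores = text_emb @ image_emb                            # shape (|Y|, |R|)

# Joint distribution p(y, r | x) over category-rationale pairs for this image.
p_joint = softmax(scores.reshape(-1)).reshape(scores.shape)

# Conditioning on a rationale: p(y | x, r) renormalizes the joint over
# categories, so each rationale can induce a different category ranking.
p_y_given_r = p_joint / p_joint.sum(axis=0, keepdims=True)

# Marginalizing over rationales gives a multi-rationale consensus p(y | x).
p_y = p_joint.sum(axis=1)
predicted = categories[int(np.argmax(p_y))]
```

The key contrast with plain prompt conditioning is that the distribution is treated as a proper joint: conditioning (`p_y_given_r`) and marginalizing (`p_y`) both fall out of ordinary probability rules, with no training step involved.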