🤖 AI Summary
Zero-shot multi-label recognition (MLR) faces challenges including the absence of training data and the inability to fine-tune vision-language models (VLMs). This paper proposes a fully black-box approach that requires no ground-truth labels, model fine-tuning, or architectural modifications, relying solely on raw VLM output scores. Our key contributions are: (1) an empirical discovery that the second-highest compound-prompt score exhibits higher discriminability than the maximum score for multi-label inference; (2) construction of compound prompts grounded in object co-occurrence priors; and (3) an unsupervised score-debiasing and adaptive-fusion mechanism that explicitly models the semantic ambiguities inherent in "AND"/"OR" logical relations among labels. Evaluated across multiple benchmarks, our method consistently outperforms training-dependent zero-shot baselines, achieving substantial gains in mean Average Precision (mAP) while improving both multi-label ranking quality and prediction robustness.
📝 Abstract
Zero-shot multi-label recognition (MLR) with Vision-Language Models (VLMs) faces significant challenges in the absence of training data, model tuning, or architectural modifications. Existing approaches require prompt tuning or architectural adaptations, limiting zero-shot applicability. Our work proposes a novel solution that treats VLMs as black boxes, leveraging their output scores without training data or ground truth. Using large language model insights on object co-occurrence, we introduce compound prompts grounded in realistic object combinations. Analysis of these prompt scores reveals VLM biases and "AND"/"OR" signal ambiguities; notably, maximum compound scores are surprisingly suboptimal compared to second-highest scores. We address these issues through a debiasing and score-fusion algorithm that corrects image bias and clarifies VLM response behaviors. Our method also enhances other zero-shot approaches, consistently improving their results. Experiments show superior mean Average Precision (mAP) compared to methods requiring training data, achieved through refined object ranking for robust zero-shot MLR.
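The "second-highest score" observation above can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `label_score` and the input score list are assumed names, and the scores stand in for a VLM's similarities between an image and the compound prompts that mention a given label.

```python
def label_score(compound_scores: list[float]) -> float:
    """Score one label from its compound-prompt similarities.

    Rather than max(compound_scores), return the second-highest value,
    which the abstract reports is more discriminative for multi-label
    inference than the maximum.
    """
    if len(compound_scores) < 2:
        return compound_scores[0]
    return sorted(compound_scores, reverse=True)[1]

# Illustrative raw similarities for compound prompts containing one label:
scores = [0.31, 0.27, 0.35, 0.22]
print(label_score(scores))  # second-highest value: 0.31
```

Ranking labels by this statistic instead of the maximum is what yields the mAP gains reported above; the debiasing and fusion steps then operate on these per-label scores.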