🤖 AI Summary
This paper addresses the problem of evaluating the quality of textual descriptors, such as class names or descriptive phrases, in vision tasks. Existing methods rely heavily on classification accuracy and fail to characterize the intrinsic representational capacity of descriptors. To overcome this limitation, the authors propose a dual-dimensional evaluation framework: (1) a representation dimension, quantifying descriptor quality via two novel alignment metrics, Global Alignment and CLIP Similarity, in the vision-language embedding space; and (2) a semantic compatibility dimension, measuring alignment with the pre-training corpora of foundation models. They systematically benchmark mainstream descriptor generation strategies, including zero-shot LLM generation and iterative refinement, across VLMs such as CLIP. Experimental results demonstrate that the metrics effectively discriminate descriptor quality, uncover interactions between generation strategies and model architectures, and provide both theoretical foundations and practical tools for interpretable, scalable visual descriptor design.
📝 Abstract
Text-based visual descriptors, ranging from simple class names to more descriptive phrases, are widely used in visual concept discovery and image classification with vision-language models (VLMs). Their effectiveness, however, depends on a complex interplay of factors, including semantic clarity, presence in the VLM's pre-training data, and how well the descriptors serve as a meaningful representation space. In this work, we systematically analyze descriptor quality along two key dimensions: (1) representational capacity, and (2) relationship with VLM pre-training data. We evaluate a spectrum of descriptor generation methods, from zero-shot LLM-generated prompts to iteratively refined descriptors. Motivated by ideas from representation alignment and language understanding, we introduce two alignment-based metrics, Global Alignment and CLIP Similarity, that move beyond accuracy. These metrics allow us to shed light on how different descriptor generation strategies interact with foundation model properties, offering insights into ways of studying descriptor effectiveness beyond accuracy evaluations.
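The abstract does not define the two metrics precisely, but both operate on descriptor and image embeddings in a shared vision-language space. A minimal sketch of what such alignment scores might look like, assuming L2-normalized CLIP-style features and hypothetical definitions (per-pair mean cosine similarity for CLIP Similarity; centroid-to-centroid cosine similarity for Global Alignment):

```python
import numpy as np

def l2_normalize(x):
    # Project embeddings onto the unit sphere, as CLIP-style models do.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_similarity(text_embs, image_embs):
    # Hypothetical definition: mean cosine similarity between every
    # descriptor embedding and every image embedding of the class it describes.
    sims = l2_normalize(text_embs) @ l2_normalize(image_embs).T
    return float(sims.mean())

def global_alignment(text_embs, image_embs):
    # Hypothetical definition: cosine similarity between the centroid of the
    # descriptor set and the centroid of the image set, one global score
    # for how well the descriptors cover the visual concept.
    t = l2_normalize(text_embs.mean(axis=0))
    v = l2_normalize(image_embs.mean(axis=0))
    return float(t @ v)

# Toy 512-d embeddings standing in for CLIP text/image features.
rng = np.random.default_rng(0)
text_embs = rng.standard_normal((5, 512))    # 5 descriptors for one class
image_embs = rng.standard_normal((20, 512))  # 20 images of that class

print(clip_similarity(text_embs, image_embs))
print(global_alignment(text_embs, image_embs))
```

In practice the embeddings would come from a pretrained VLM's text and image encoders rather than random draws; the point is only that both metrics are accuracy-free functions of the joint embedding geometry.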