🤖 AI Summary
This work systematically evaluates the cross-task performance consistency of prominent vision-language models (VLMs), including CLIP, BLIP, and LXMERT, across image retrieval, caption generation, and visual reasoning, exposing an inherent trade-off between generalization capability and task-specific specialization. To quantify model stability, the study proposes a novel metric, Cross-Dataset Consistency (CDC), integrated with multi-dimensional evaluation of accuracy, generation quality, and efficiency. Experimental results show that CLIP achieves the highest generalization (CDC = 0.92), BLIP excels in accuracy on curated task-specific data, and LXMERT attains superior performance in structured visual reasoning. Crucially, this study introduces consistency modeling into VLM evaluation for the first time, establishing a quantifiable framework for balancing generality against specialization and thereby providing principled, actionable guidance for model selection and architecture design in industrial applications.
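The summary does not reproduce the CDC formula itself. As one plausible reading, a cross-dataset consistency score can be derived from the dispersion of a model's per-dataset scores: the more uniform the scores, the higher the consistency. The sketch below is a minimal, hypothetical implementation assuming CDC = 1 − coefficient of variation of the per-dataset scores; the function name, formula, and input numbers are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def cross_dataset_consistency(scores):
    """Hypothetical CDC: 1 minus the coefficient of variation of a
    model's scores across datasets, clipped to [0, 1].
    Higher values mean more uniform cross-dataset performance.
    NOTE: this formula is an assumption for illustration, not the
    metric definition from the paper."""
    s = np.asarray(scores, dtype=float)
    if s.mean() == 0:
        return 0.0
    cv = s.std(ddof=0) / s.mean()  # coefficient of variation
    return float(max(0.0, 1.0 - cv))

# Illustrative per-dataset scores for one model (made-up numbers,
# not the paper's experimental results)
example_scores = [0.78, 0.81, 0.75, 0.80]
print(f"CDC = {cross_dataset_consistency(example_scores):.2f}")
```

Under this reading, a model like CLIP with near-uniform scores across datasets would score close to 1, while a specialist whose performance collapses outside its training distribution would score much lower, which matches the generalization-versus-specialization framing above.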
📝 Abstract
Vision-Language Models (VLMs) are advancing multimodal AI, yet their performance consistency across tasks is underexamined. We benchmark CLIP, BLIP, and LXMERT across diverse datasets spanning retrieval, captioning, and reasoning. Our evaluation includes task accuracy, generation quality, efficiency, and a novel Cross-Dataset Consistency (CDC) metric. CLIP shows the strongest generalization (CDC = 0.92), BLIP excels on curated data, and LXMERT leads in structured reasoning. These results expose trade-offs between generalization and specialization, informing industrial deployment of VLMs and guiding development toward robust, task-flexible architectures.