🤖 AI Summary
This study presents the first systematic evaluation of the zero-shot generalization capabilities of general-purpose vision-language models (VLMs) on isolated sign language recognition (ISLR) in the era of large language models. Using the WLASL300 benchmark, the authors compare several open-source and closed-source VLMs through zero-shot inference, prompt engineering, and multimodal alignment analysis. The results show that open-source VLMs fall well short of task-specific supervised models, though they do exhibit partial visual-semantic alignment between signs and text descriptions, while large closed-source models perform markedly better. The findings highlight the critical roles of model scale and training-data diversity in effective sign language interpretation and offer new insights into the applicability of general-purpose multimodal models to low-resource visual tasks.
📝 Abstract
Recent Vision-Language Models (VLMs) have demonstrated strong performance across a wide range of multimodal reasoning tasks. This raises the question of whether such general-purpose models can also address specialized visual recognition problems, such as isolated sign language recognition (ISLR), without task-specific training. In this work, we investigate the capability of modern VLMs to perform ISLR in a zero-shot setting. We evaluate several open-source and proprietary VLMs on the WLASL300 benchmark. Our experiments show that, under prompt-only zero-shot inference, current open-source VLMs trail classic supervised ISLR classifiers by a wide margin. However, follow-up experiments reveal that these models capture partial visual-semantic alignment between signs and text descriptions. Larger proprietary models achieve substantially higher accuracy, highlighting the importance of model scale and training data diversity. All our code is publicly available on GitHub.
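The prompt-only zero-shot protocol described above can be sketched as a simple evaluation loop: show the model each video together with the gloss vocabulary, collect its free-text answer, and score top-1 accuracy against the gold gloss. The sketch below is illustrative only; `query_vlm` is a hypothetical stand-in for a call to whichever open-source or proprietary VLM is being evaluated, and the prompt template and answer normalization are assumptions, not the paper's exact protocol.

```python
from typing import Callable, List


def normalize(label: str) -> str:
    """Lowercase and strip punctuation so free-text answers can match glosses."""
    return "".join(ch for ch in label.lower() if ch.isalnum() or ch == " ").strip()


def zero_shot_accuracy(
    samples: List[dict],                       # each: {"video": ..., "gloss": "book"}
    vocabulary: List[str],                     # e.g. the 300 WLASL300 gloss labels
    query_vlm: Callable[[object, str], str],   # hypothetical VLM call: (video, prompt) -> text
) -> float:
    """Top-1 accuracy of prompt-only zero-shot ISLR predictions."""
    prompt = (
        "Which sign from this list is shown in the video? "
        "Answer with exactly one word.\nOptions: " + ", ".join(vocabulary)
    )
    correct = 0
    for sample in samples:
        prediction = query_vlm(sample["video"], prompt)
        if normalize(prediction) == normalize(sample["gloss"]):
            correct += 1
    return correct / len(samples)
```

Constraining the answer to the gloss vocabulary in the prompt is what makes free-text VLM output comparable to a closed-set supervised classifier.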