🤖 AI Summary
Visual activity recognition evaluation suffers from two sources of ambiguity: synonymous verbs can describe the same event (e.g., "brushing" vs. "grooming"), and different but equally valid perspectives on an image can yield distinct verb choices (e.g., "piloting" vs. "operating"). Conventional exact-match metrics, which rely on a single ground-truth label, fail to credit these valid alternatives and therefore underestimate model performance. To address this, the authors propose an evaluation framework based on vision-language clustering: leveraging the imSitu dataset, they construct verb sense clusters that group semantically similar action descriptions, enabling multiple valid predictions to match a single image. Analysis shows that each image maps to an average of 2.8 sense clusters, with each cluster capturing a distinct perspective on the image. Compared with exact matching, the cluster-based evaluation aligns better with human judgments and offers more discriminative comparisons across diverse activity recognition models. The core contribution is the systematic integration of verb sense clustering into visual activity evaluation, yielding a more cognitively grounded and robust assessment paradigm.
📝 Abstract
Evaluating visual activity recognition systems is challenging due to inherent ambiguities in verb semantics and image interpretation. When describing actions in images, synonymous verbs can refer to the same event (e.g., brushing vs. grooming), while different perspectives can lead to equally valid but distinct verb choices (e.g., piloting vs. operating). Standard exact-match evaluation, which relies on a single gold answer, fails to capture these ambiguities, resulting in an incomplete assessment of model performance. To address this, we propose a vision-language clustering framework that constructs verb sense clusters, providing a more robust evaluation. Our analysis of the imSitu dataset shows that each image maps to an average of 2.8 sense clusters, with each cluster representing a distinct perspective of the image. We evaluate multiple activity recognition models and compare our cluster-based evaluation with standard evaluation methods. Additionally, our human alignment analysis suggests that the cluster-based evaluation better aligns with human judgements, offering a more nuanced assessment of model performance.
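The matching logic described above can be sketched in a few lines. This is an illustrative mock-up, not the paper's implementation: the cluster contents and predictions below are invented examples, and the real framework derives its verb sense clusters from imSitu annotations.

```python
def exact_match(prediction: str, gold_verb: str) -> bool:
    """Standard evaluation: the prediction must equal the single gold verb."""
    return prediction == gold_verb

def cluster_match(prediction: str, sense_clusters: list[set[str]]) -> bool:
    """Cluster-based evaluation: the prediction is correct if it falls in
    any verb sense cluster associated with the image."""
    return any(prediction in cluster for cluster in sense_clusters)

# An image may map to several sense clusters (2.8 on average in the paper's
# imSitu analysis), each representing one valid perspective on the activity.
# These example clusters are hypothetical.
image_clusters = [
    {"brushing", "grooming"},   # synonymous descriptions of the same event
    {"piloting", "operating"},  # a different but equally valid perspective
]

print(exact_match("grooming", "brushing"))        # False: a valid synonym is penalized
print(cluster_match("grooming", image_clusters))  # True: the synonym is credited
```

Under exact match, only one of the many valid verbs earns credit; under cluster match, any verb in any of the image's sense clusters does, which is the behavior the human alignment analysis favors.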