VCBench: A Controllable Benchmark for Symbolic and Abstract Challenges in Video Cognition

📅 2024-11-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-language model (VLM) benchmarks lack controlled evaluation of higher-order cognitive capabilities, particularly symbolic representation and abstract reasoning. Method: We introduce VCBench, the first controllable video cognition benchmark: (1) a Python-based procedural engine generates dynamic videos embedding symbolic and abstract concepts, with fine-grained control over content and hierarchical difficulty scaling; (2) task-oriented, structured question templates cover multi-level cognitive tasks, including abstract reasoning and symbolic manipulation; and (3) a standardized evaluation protocol for LVLMs is established. Contribution/Results: Experiments show that the performance of state-of-the-art models (e.g., Qwen2-VL-72B) drops sharply by 19% as video complexity rises on tasks involving abstract concepts, exposing critical bottlenecks in symbolic and abstract understanding. VCBench fills a key gap in the controllable assessment of advanced cognition, offering a reproducible, extensible evaluation infrastructure for video-language models.
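The paper does not publish its engine at this level of detail, but the idea of a procedural generator with a difficulty knob can be illustrated with a minimal, hypothetical sketch: moving square "symbols" are rendered into grayscale frames, and the number of symbols stands in for difficulty scaling. All names and parameters here (`generate_symbolic_video`, `num_symbols`, etc.) are illustrative assumptions, not the actual VCBench API.

```python
import numpy as np

def generate_symbolic_video(num_frames=8, size=32, num_symbols=2, seed=0):
    """Hypothetical sketch of a procedural video engine: renders
    num_symbols moving 4x4 squares into grayscale frames.
    num_symbols acts as a crude difficulty knob."""
    rng = np.random.default_rng(seed)
    # Random initial positions (top-left corners) and integer velocities.
    positions = rng.integers(0, size - 4, size=(num_symbols, 2))
    velocities = rng.integers(-2, 3, size=(num_symbols, 2))
    frames = []
    for _ in range(num_frames):
        frame = np.zeros((size, size), dtype=np.uint8)
        for i in range(num_symbols):
            y, x = positions[i]
            frame[y:y + 4, x:x + 4] = 255  # draw the symbol
            # Advance and clamp to keep the symbol inside the frame.
            positions[i] = np.clip(positions[i] + velocities[i], 0, size - 4)
        frames.append(frame)
    return np.stack(frames)  # shape: (num_frames, size, size)

video = generate_symbolic_video()
```

Because every frame is produced from a seeded generator and explicit parameters, the same clip can be regenerated exactly, which is what makes this style of benchmark controllable and reproducible.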

📝 Abstract
Recent advancements in Large Video-Language Models (LVLMs) have driven the development of benchmarks designed to assess cognitive abilities in video-based tasks. However, most existing benchmarks heavily rely on web-collected videos paired with human annotations or model-generated questions, which limit control over the video content and fall short in evaluating advanced cognitive abilities involving symbolic elements and abstract concepts. To address these limitations, we introduce VCBench, a controllable benchmark to assess LVLMs' cognitive abilities, involving symbolic and abstract concepts at varying difficulty levels. By generating video data with the Python-based engine, VCBench allows for precise control over the video content, creating dynamic, task-oriented videos that feature complex scenes and abstract concepts. Each task pairs with tailored question templates that target specific cognitive challenges, providing a rigorous evaluation test. Our evaluation reveals that even state-of-the-art (SOTA) models, such as Qwen2-VL-72B, struggle with simple video cognition tasks involving abstract concepts, with performance sharply dropping by 19% as video complexity rises. These findings reveal the current limitations of LVLMs in advanced cognitive tasks and highlight the critical role of VCBench in driving research toward more robust LVLMs for complex video cognition challenges.
Problem

Research questions and friction points this paper is trying to address.

Evaluating cognitive abilities in video-language models
Assessing symbolic and abstract perception in LVLMs
Controlling video content and difficulty for better diagnostics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic video generation via programmatic engine
Controllable benchmark with fine-grained video elements
Evaluates abstract, symbolic, and multimodal cognition
👥 Authors
Chenglin Li — Zhejiang University
Qianglong Chen — Zhejiang University
Zhi Li — Zhejiang University
Feng Tao — Bosch USA
Yin Zhang — Zhejiang University