VELOCITI: Benchmarking Video-Language Compositional Reasoning with Strict Entailment

📅 2024-06-16
📈 Citations: 3
Influential: 1
🤖 AI Summary
Current vision-language models exhibit severe limitations in compositional reasoning over short videos, i.e., associating people with their actions across time and across events. Method: The paper introduces VELOCITI, a benchmark that disentangles the comprehension of agents, actions, and their associations across multiple events, and proposes StrictVLE, a video-language entailment protocol requiring correct binary classification of both the positive and the negative caption rather than mere ranking. Negative captions are constructed both from entities appearing in the video and by pure text manipulation, and evaluation uses multi-frame visual inputs. Contribution/Results: Compared to ClassicVLE and multiple-choice (MC) evaluation, StrictVLE exposes model deficiencies more precisely: the best models, LLaVA-OneVision and Gemini-1.5-Pro, reach only 44.5% and 49.3% accuracy, far below human performance (93.0%). Results show that action understanding lags behind agent identification, and that video-grounded negative captions are harder for models than purely textual ones.

📝 Abstract
A fundamental aspect of compositional reasoning in a video is associating people and their actions across time. Recent years have seen great progress in general-purpose vision or video models and a move towards long-video understanding. While exciting, we take a step back and ask: are current models good at compositional reasoning on short videos? To this end, we introduce VELOCITI, a benchmark to study Video-LLMs by disentangling and assessing the comprehension of agents, actions, and their associations across multiple events. We adopt the Video-Language Entailment setup and propose StrictVLE that requires correct classification (rather than ranking) of the positive and negative caption. We evaluate several models and observe that even the best, LLaVA-OneVision (44.5%) and Gemini-1.5-Pro (49.3%), are far from human accuracy at 93.0%. Results show that action understanding lags behind agents, and negative captions created using entities appearing in the video perform worse than those obtained from pure text manipulation. We also present challenges with ClassicVLE and multiple-choice (MC) evaluation, strengthening our preference for StrictVLE. Finally, we validate that our benchmark requires visual inputs of multiple frames making it ideal to study video-language compositional reasoning.
Problem

Research questions and friction points this paper is trying to address.

Assessing video-language models' compositional reasoning on short videos
Evaluating comprehension of agents, actions, and their temporal associations
Benchmarking strict entailment classification accuracy versus human performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the VELOCITI benchmark for probing Video-LLMs
Proposes StrictVLE, which requires classifying (not merely ranking) positive and negative captions
Requires multi-frame visual inputs for reasoning (see the frame-sampling sketch below)