AI Summary
This work addresses the limitations of existing video understanding benchmarks, which predominantly feature Western-centric data and English, and therefore fail to evaluate model performance in multicultural and multilingual settings. To bridge this gap, we introduce CURVE, the first human-annotated benchmark for multicultural long-form video reasoning, constructed across 18 global regions in their native languages. CURVE features complex question-answering tasks and multi-step reasoning annotations that demand deep comprehension of visual cultural context. By constructing evidence graphs from reasoning trajectories and employing an iterative analysis strategy, we show that current video large language models significantly underperform humans on CURVE, primarily due to insufficient perception of culturally specific visual elements. This highlights both the difficulty and the necessity of our benchmark for advancing culturally aware video understanding.
Abstract
Recent advancements in video models have shown tremendous progress, particularly in long video understanding. However, current benchmarks predominantly feature Western-centric data and English as the dominant language, introducing significant biases in evaluation. To address this, we introduce CURVE (Cultural Understanding and Reasoning in Video Evaluation), a challenging benchmark for multicultural and multilingual video reasoning. CURVE comprises high-quality, entirely human-generated annotations for diverse, region-specific cultural videos across 18 global locales. Unlike prior work that relies on automatic translation, CURVE provides complex questions, answers, and multi-step reasoning traces, all crafted in native languages. Making progress on CURVE requires a deeply situated understanding of visual cultural context. Furthermore, we leverage CURVE's reasoning traces to construct evidence-based graphs and propose a novel iterative strategy that uses these graphs to identify fine-grained reasoning errors. Our evaluations reveal that SoTA Video-LLMs struggle significantly, performing substantially below human-level accuracy, with errors stemming primarily from the visual perception of cultural elements. CURVE will be publicly available at https://github.com/google-deepmind/neptune?tab=readme-ov-file\#minerva-cultural
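The evidence-graph error analysis mentioned above can be sketched in code. This is a minimal illustrative sketch only: the graph structure, the step and evidence identifiers, and the checking logic are all assumptions for exposition, not CURVE's actual implementation.

```python
# Hedged sketch: localizing the first faulty step in a model's reasoning
# trace using an evidence graph. Each annotated reasoning step is linked
# to the visual evidence it depends on; walking the steps in order and
# checking whether the model verified that evidence pinpoints where the
# reasoning broke down (e.g., a perception failure on a cultural element).
# All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class EvidenceGraph:
    """Maps each reasoning step to the set of evidence ids it requires."""
    edges: dict[str, set[str]] = field(default_factory=dict)

    def add_step(self, step_id: str, evidence_ids: set[str]) -> None:
        self.edges[step_id] = set(evidence_ids)

def first_failing_step(graph: EvidenceGraph, verified: set[str]):
    """Return the first step whose required evidence was not verified
    by the model, plus the missing evidence; (None, set()) if all pass."""
    for step_id, required in graph.edges.items():
        missing = required - verified
        if missing:
            return step_id, missing
    return None, set()

# Toy trace: three dependent reasoning steps over hypothetical evidence.
g = EvidenceGraph()
g.add_step("s1", {"e1"})        # e.g., recognize a culturally specific garment
g.add_step("s2", {"e1", "e2"})  # e.g., link the garment to a regional festival
g.add_step("s3", {"e2", "e3"})  # e.g., infer the event shown in the video

step, missing = first_failing_step(g, verified={"e1", "e3"})
print(step, sorted(missing))  # -> s2 ['e2']: the error is localized to step s2
```

Iterating this check over many traces would let one attribute failures to perception (missing evidence) versus reasoning (evidence present but conclusion wrong), which is the kind of fine-grained attribution the abstract describes.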