🤖 AI Summary
Large video models (LVMs) suffer from hallucination when interpreting dynamic content, yet existing benchmarks rely predominantly on manual annotation and neglect the bottom-up mechanisms of human visual perception. To address this, we propose MESH, a fine-grained evaluation benchmark grounded in human visual perception. MESH introduces a target-trap question design to assess hierarchical temporal understanding, spanning object recognition, attribute discrimination, and multi-subject action alignment. It combines binary and multiple-choice formats with perceptually motivated distractors to quantify hallucination propensity across levels of abstraction. Experiments reveal that state-of-the-art LVMs perform robustly on basic recognition but hallucinate markedly more when interpreting fine-grained features or aligning actions across subjects in long videos. MESH establishes an interpretable, hierarchically structured, and perceptually aligned paradigm for video hallucination assessment, enabling systematic diagnosis of model failures along human-centered cognitive dimensions.
📝 Abstract
Large Video Models (LVMs) build on the semantic capabilities of Large Language Models (LLMs) and vision modules by integrating temporal information to better understand dynamic video content. Despite this progress, LVMs are prone to hallucinations, producing inaccurate or irrelevant descriptions. Current benchmarks for video hallucination depend heavily on manual categorization of video content and neglect the perception-based processes through which humans naturally interpret videos. We introduce MESH, a benchmark designed to evaluate hallucinations in LVMs systematically. MESH uses a Question-Answering framework with binary and multiple-choice formats that incorporate target and trap instances. It follows a bottom-up approach, evaluating basic objects, coarse-to-fine subject features, and subject-action pairs, in line with how humans understand video. We demonstrate that MESH offers an effective and comprehensive approach to identifying hallucinations in videos. Our evaluations show that while LVMs excel at recognizing basic objects and features, their susceptibility to hallucination increases markedly when they must handle fine details or align multiple actions involving various subjects in longer videos.
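To make the target-trap evaluation concrete, here is a minimal sketch of what a binary-format QA pair and a trap-based hallucination score could look like. All field names, the `BinaryQA` class, and the scoring rule are hypothetical illustrations, not the released MESH schema.

```python
from dataclasses import dataclass

# Hypothetical illustration of a MESH-style target/trap binary QA pair.
# Field names and scoring are assumptions, not the official MESH format.

@dataclass
class BinaryQA:
    video_id: str
    question: str     # e.g. "Does a dog appear in the video?"
    is_target: bool   # True: the queried content is present (target)
                      # False: plausible but absent from the video (trap)

def hallucination_rate(pairs: list[BinaryQA], answers: list[bool]) -> float:
    """Fraction of trap questions the model wrongly affirms.

    A "yes" on a trap instance means the model asserted content that
    does not appear in the video, i.e. a hallucination.
    """
    traps = [(p, a) for p, a in zip(pairs, answers) if not p.is_target]
    if not traps:
        return 0.0
    return sum(a for _, a in traps) / len(traps)

# Example: one target and one trap question for the same video.
pairs = [
    BinaryQA("vid_001", "Does a person appear in the video?", is_target=True),
    BinaryQA("vid_001", "Does a bicycle appear in the video?", is_target=False),
]
model_answers = [True, True]  # second "yes" affirms absent content
print(hallucination_rate(pairs, model_answers))  # 1.0
```

Under this reading, the binary format isolates hallucination as false affirmation of traps, while the multiple-choice format (with perceptually motivated distractors) probes the same failure mode at each level of the bottom-up hierarchy: objects, subject features, and subject-action pairs.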