MESH -- Understanding Videos Like Human: Measuring Hallucinations in Large Video Models

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large video models (LVMs) suffer from hallucination in dynamic content understanding, yet existing benchmarks predominantly rely on manual annotation and neglect human bottom-up visual perception mechanisms. To address this, we propose MESH—a novel fine-grained evaluation benchmark grounded in human visual perception. MESH introduces target-trap question design to assess hierarchical temporal reasoning, spanning object recognition, attribute discrimination, and multi-agent action alignment. It integrates binary and multiple-choice formats with perceptually motivated distractors to quantify hallucination propensity across abstraction levels. Experiments reveal that state-of-the-art LVMs exhibit robust performance on basic recognition but suffer pronounced hallucination in fine-grained feature interpretation and long-video action alignment. MESH establishes a new paradigm for video hallucination assessment—interpretable, hierarchically structured, and perceptually aligned—enabling systematic diagnosis of model failures along human-centered cognitive dimensions.

📝 Abstract
Large Video Models (LVMs) build on the semantic capabilities of Large Language Models (LLMs) and vision modules by integrating temporal information to better understand dynamic video content. Despite their progress, LVMs are prone to hallucinations, producing inaccurate or irrelevant descriptions. Current benchmarks for video hallucination depend heavily on manual categorization of video content, neglecting the perception-based processes through which humans naturally interpret videos. We introduce MESH, a benchmark designed to evaluate hallucinations in LVMs systematically. MESH uses a Question-Answering framework with binary and multi-choice formats incorporating target and trap instances. It follows a bottom-up approach, evaluating basic objects, coarse-to-fine subject features, and subject-action pairs, aligning with human video understanding. We demonstrate that MESH offers an effective and comprehensive approach for identifying hallucinations in videos. Our evaluations show that while LVMs excel at recognizing basic objects and features, their susceptibility to hallucinations increases markedly when handling fine details or aligning multiple actions involving various subjects in longer videos.
Problem

Research questions and friction points this paper is trying to address.

Measuring hallucinations in Large Video Models that produce inaccurate or irrelevant descriptions
Evaluating LVMs' susceptibility to errors on fine-grained details and multi-subject actions
Addressing the lack of perception-based benchmarks for video understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Question-Answering framework combining binary and multi-choice formats
Bottom-up evaluation of basic objects, coarse-to-fine subject features, and subject-action pairs
Target and trap instances to systematically expose hallucinations
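The target-and-trap design described above can be sketched as a small data structure plus a per-level scorer. This is a minimal illustrative sketch, not the paper's actual implementation: the field names (`question`, `options`, `target`, `level`) and the three hierarchy labels are assumptions based on the bottom-up object/feature/action structure the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class MeshItem:
    # Hypothetical item schema; names are illustrative, not from the paper.
    question: str        # the query posed to the LVM
    fmt: str             # "binary" or "multi-choice"
    options: list[str]   # candidate answers, including trap distractors
    target: str          # ground-truth answer grounded in the video
    level: str           # "object" | "feature" | "action" (bottom-up hierarchy)

def score(items, predictions):
    """Per-level accuracy: fraction of items where the model picked the target."""
    totals, correct = {}, {}
    for item, pred in zip(items, predictions):
        totals[item.level] = totals.get(item.level, 0) + 1
        if pred == item.target:
            correct[item.level] = correct.get(item.level, 0) + 1
    return {lvl: correct.get(lvl, 0) / n for lvl, n in totals.items()}

items = [
    MeshItem("Is there a dog in the video?", "binary",
             ["yes", "no"], "yes", "object"),
    MeshItem("What color is the dog's collar?", "multi-choice",
             ["red", "blue", "green", "no collar"], "red", "feature"),
]
# A model that answers the object question correctly but falls for a
# feature-level trap distractor scores 1.0 on objects, 0.0 on features.
print(score(items, ["yes", "blue"]))
```

Reporting accuracy per hierarchy level, rather than one aggregate number, is what lets a benchmark like this localize where hallucination sets in: a model can score well at the object level while degrading on fine-grained features and action alignment.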