Video Reality Test: Can AI-Generated ASMR Videos fool VLMs and Humans?

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the perceptual authenticity assessment of AI-generated videos under audiovisual coupling, particularly in strongly synchronized scenarios such as ASMR. To this end, we introduce the first ASMR-specific benchmark for evaluating authenticity under strong audio-video coupling. Our methodology features a creator-judge adversarial paradigm and an immersive joint audiovisual evaluation framework, enabling collaborative blind testing by human experts and multimodal large models (e.g., Gemini 2.5-Pro). Key findings reveal a critical cross-modal consistency bottleneck in vision-language models (VLMs) under audio guidance: on Veo3.1-Fast-generated videos, Gemini 2.5-Pro achieves only 56% discrimination accuracy, near chance level, while human experts attain 81.25%. Although audio cues significantly enhance discriminability, superficial artifacts (e.g., watermarks) severely mislead VLMs. This work establishes a novel benchmark, methodology, and empirical insights for video authenticity assessment.

📝 Abstract
Recent advances in video generation have produced vivid content that is often indistinguishable from real videos, making AI-generated video detection an emerging societal challenge. Prior AIGC detection benchmarks mostly evaluate video without audio, target broad narrative domains, and focus solely on classification. Yet it remains unclear whether state-of-the-art video generation models can produce immersive, audio-paired videos that reliably deceive humans and VLMs. To this end, we introduce Video Reality Test, an ASMR-sourced video benchmark suite for testing perceptual realism under tight audio-visual coupling, featuring the following dimensions: **(i) Immersive ASMR video-audio sources.** Built on carefully curated real ASMR videos, the benchmark targets fine-grained action-object interactions with diversity across objects, actions, and backgrounds. **(ii) Peer-review evaluation.** An adversarial creator-reviewer protocol in which video generation models act as creators aiming to fool reviewers, while VLMs serve as reviewers seeking to identify fakeness. Our experimental findings show that the best creator, Veo3.1-Fast, fools most VLMs: the strongest reviewer (Gemini 2.5-Pro) achieves only 56% accuracy (chance is 50%), far below human experts (81.25%). Adding audio improves real-fake discrimination, yet superficial cues such as watermarks can still significantly mislead models. These findings delineate the current boundary of video generation realism and expose limitations of VLMs in perceptual fidelity and audio-visual consistency. Our code is available at https://github.com/video-reality-test/video-reality-test.
Problem

Research questions and friction points this paper is trying to address.

Evaluates AI-generated ASMR video realism with audio
Tests if videos deceive humans and vision-language models
Measures perceptual fidelity and audio-visual consistency limits
Innovation

Methods, ideas, or system contributions that make the work stand out.

ASMR video benchmark tests audio-visual coupling realism
Adversarial creator-reviewer protocol evaluates VLM deception
Audio improves detection but superficial cues mislead models
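The creator-reviewer protocol above can be sketched as a simple blind-test scoring loop: a reviewer (VLM or human) labels each clip real or fake, and its discrimination accuracy is compared against the 50% chance level. This is a hypothetical Python sketch with toy verdicts; the function and data below are illustrative assumptions, not the paper's actual evaluation code.

```python
def discrimination_accuracy(labels, verdicts):
    """Fraction of clips whose real/fake verdict matches the ground truth."""
    assert len(labels) == len(verdicts)
    correct = sum(l == v for l, v in zip(labels, verdicts))
    return correct / len(labels)

# Toy blind test: 1 = real clip, 0 = AI-generated clip (illustrative data).
ground_truth    = [1, 0, 1, 0, 1, 0, 1, 0]
vlm_verdicts    = [1, 1, 0, 0, 1, 1, 0, 0]  # a near-chance reviewer
expert_verdicts = [1, 0, 1, 0, 1, 1, 1, 0]  # a stronger reviewer

vlm_acc = discrimination_accuracy(ground_truth, vlm_verdicts)
expert_acc = discrimination_accuracy(ground_truth, expert_verdicts)
print(f"VLM reviewer:  {vlm_acc:.0%}")     # → 50% (chance level)
print(f"Human expert:  {expert_acc:.0%}")  # → 88%
```

In the paper's actual results the gap is similar in spirit: the strongest VLM reviewer lands near chance (56%) while human experts reach 81.25%.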