🤖 AI Summary
While current multimodal large language models (MLLMs) achieve strong performance on static-image OCR, their accuracy drops significantly on video OCR due to motion blur, temporal dynamics, and visual interference. Method: We introduce MME-VideoOCR, the first comprehensive, video-specific OCR benchmark, covering 44 real-world scenarios, 25 tasks across 10 categories, 1,464 videos, and 2,000 human-annotated question-answer pairs. It systematically evaluates spatio-temporal reasoning, cross-frame information integration, and robustness to language prior bias. Fine-grained failure analysis further diagnoses the gap between frame-wise text recognition and holistic video understanding. Contribution/Results: Zero-shot evaluation of 18 state-of-the-art models reveals that even the best performer, Gemini-2.5 Pro, achieves only 73.7% accuracy. Empirical analysis identifies high-resolution visual input and sufficient temporal sampling as key factors for reliable video OCR. This work establishes an empirical foundation for evaluating and advancing video OCR capabilities.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved considerable accuracy in Optical Character Recognition (OCR) from static images. However, their efficacy in video OCR is significantly diminished due to factors such as motion blur, temporal variations, and visual effects inherent in video content. To provide clearer guidance for training practical MLLMs, we introduce the MME-VideoOCR benchmark, which encompasses a comprehensive range of video OCR application scenarios. MME-VideoOCR features 10 task categories comprising 25 individual tasks and spans 44 diverse scenarios. These tasks extend beyond text recognition to incorporate deeper comprehension and reasoning about textual content within videos. The benchmark consists of 1,464 videos with varying resolutions, aspect ratios, and durations, along with 2,000 meticulously curated, manually annotated question-answer pairs. We evaluate 18 state-of-the-art MLLMs on MME-VideoOCR, revealing that even the best-performing model (Gemini-2.5 Pro) achieves an accuracy of only 73.7%. Fine-grained analysis indicates that while existing MLLMs demonstrate strong performance on tasks where the relevant text is contained within a single frame or a few frames, they exhibit limited capability in effectively handling tasks that demand holistic video comprehension. These limitations are especially evident in scenarios that require spatio-temporal reasoning, cross-frame information integration, or resistance to language prior bias. Our findings also highlight the importance of high-resolution visual input and sufficient temporal coverage for reliable OCR in dynamic video scenarios.
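To make the last point concrete, the sketch below illustrates one common way to give an MLLM sufficient temporal coverage: uniformly sampling frames across the full video while keeping them at high resolution rather than aggressively downscaling. This is an illustration only, not code from the paper; the function name and parameter values (`num_frames`, `max_long_side`) are our own assumptions.

```python
# Illustrative sketch (not from the MME-VideoOCR paper): uniform temporal
# sampling with OpenCV, preserving resolution so small on-screen text
# remains legible to the model.
import cv2


def sample_frames_uniform(video_path: str, num_frames: int = 32,
                          max_long_side: int = 1536):
    """Uniformly sample `num_frames` frames across the whole video.

    Resizes only when the long side exceeds `max_long_side`, since
    aggressive downscaling destroys small text. All parameter values
    are assumptions chosen for illustration.
    """
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        raise ValueError(f"Could not read frame count from {video_path}")

    # Evenly spaced frame indices spanning the full duration.
    indices = [round(i * (total - 1) / max(num_frames - 1, 1))
               for i in range(num_frames)]

    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        h, w = frame.shape[:2]
        long_side = max(h, w)
        if long_side > max_long_side:
            scale = max_long_side / long_side
            frame = cv2.resize(frame, (int(w * scale), int(h * scale)),
                               interpolation=cv2.INTER_AREA)
        frames.append(frame)
    cap.release()
    return frames
```

Sampling too few frames risks missing the brief window in which a caption or subtitle is on screen, which is one plausible reason temporal coverage matters for the cross-frame integration tasks described above.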