Perception Test 2025: Challenge Summary and a Unified VQA Extension

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalizability of existing video perception models, which often rely on task-specific pipelines. To overcome this limitation, the authors propose a unified formulation that recasts diverse perception tasks, including video question answering, object/point tracking, and action/sound localization, as multiple-choice video question answering. The resulting benchmark offers a universal interface that video-language models can tackle natively, spanning video understanding, temporal localization, visual grounding, and long-form video processing, and eliminating the need for customized solutions per task. Experimental results reveal significant performance bottlenecks in current state-of-the-art models under this unified framework, offering both a new direction and a standardized evaluation protocol for advancing general-purpose multimodal perception systems.
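
To make the reformulation concrete, here is a minimal sketch of how a point-tracking query might be recast as a multiple-choice video QA item that a video-language model can answer natively. The `MCQItem` schema, field names, and coordinate-style options are illustrative assumptions, not the challenge's actual data format.

```python
from dataclasses import dataclass

@dataclass
class MCQItem:
    """One multiple-choice video QA item (illustrative schema, not the
    official Perception Test format)."""
    video_id: str
    question: str
    options: list[str]
    answer_idx: int  # index of the correct option

def point_tracking_to_mcq(video_id: str, start_xy: tuple[float, float],
                          start_frame: int, query_frame: int,
                          candidates: list[tuple[float, float]],
                          correct_idx: int) -> MCQItem:
    """Recast a point-tracking query as a multiple-choice question.

    Instead of regressing a trajectory with a task-specific tracker, the
    model is shown the point's initial location and asked which candidate
    coordinate it occupies at a later frame.
    """
    x0, y0 = start_xy
    question = (f"A point is at normalised coordinates ({x0:.2f}, {y0:.2f}) "
                f"in frame {start_frame}. Where is the same point in frame "
                f"{query_frame}?")
    options = [f"({x:.2f}, {y:.2f})" for x, y in candidates]
    return MCQItem(video_id, question, options, correct_idx)

# Example: four candidate locations; the second is the true position.
item = point_tracking_to_mcq("video_0001", start_xy=(0.42, 0.63),
                             start_frame=0, query_frame=120,
                             candidates=[(0.10, 0.20), (0.47, 0.58),
                                         (0.80, 0.15), (0.33, 0.90)],
                             correct_idx=1)
print(item.question)
for i, opt in enumerate(item.options):
    print(f"  {chr(65 + i)}. {opt}")
```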

📝 Abstract
The Third Perception Test challenge was organised as a full-day workshop alongside the IEEE/CVF International Conference on Computer Vision (ICCV) 2025. Its primary goal is to benchmark state-of-the-art video models and measure progress in multimodal perception. This year, the workshop also featured two guest tracks: KiVA (an image understanding challenge) and Physics-IQ (a video generation challenge). In this report, we summarise the results from the main Perception Test challenge, detailing both the existing tasks and the novel additions to the benchmark. In this iteration, we placed an emphasis on task unification, as this poses a more challenging test for current SOTA multimodal models. The challenge included five consolidated tracks: unified video QA, unified object and point tracking, unified action and sound localisation, grounded video QA, and hour-long video QA, alongside an analysis and interpretability track that is still open for submissions. Notably, the unified video QA track introduced a novel subset that reformulates traditional perception tasks (such as point tracking and temporal action localisation) as multiple-choice video QA questions that video-language models can natively tackle. The unified object and point tracking track merged the original object tracking and point tracking tasks, whereas the unified action and sound localisation track merged the original temporal action localisation and temporal sound localisation tracks. Accordingly, we required competitors to use unified approaches rather than engineered pipelines with task-specific models. By proposing such a unified challenge, Perception Test 2025 highlights the significant difficulties existing models face when tackling diverse perception tasks through unified interfaces.
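
One payoff of the unified interface the abstract describes is a single evaluation loop across all tracks. Below is a minimal sketch of what such a standardized protocol could look like, assuming a hypothetical `answer_mcq` callable and item fields; this is not the challenge's actual evaluation API.

```python
from collections import defaultdict
from typing import Callable

def evaluate_unified(answer_mcq: Callable[[str, str, list], int],
                     items: list) -> dict:
    """Per-task multiple-choice accuracy under one shared interface.

    `answer_mcq(video_id, question, options)` returns the index of the
    chosen option; no task-specific heads or decoders are involved.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        pred = answer_mcq(item["video_id"], item["question"], item["options"])
        total[item["task"]] += 1
        correct[item["task"]] += int(pred == item["answer_idx"])
    return {task: correct[task] / total[task] for task in total}

# Example: score an "always pick option A" baseline on two tasks.
items = [
    {"task": "point_tracking", "video_id": "v1",
     "question": "Where is the point in frame 120?",
     "options": ["(0.10, 0.20)", "(0.47, 0.58)"], "answer_idx": 1},
    {"task": "action_localisation", "video_id": "v2",
     "question": "When does the person start pouring water?",
     "options": ["0-2 s", "5-7 s", "11-13 s"], "answer_idx": 0},
]
print(evaluate_unified(lambda vid, q, opts: 0, items))
# {'point_tracking': 0.0, 'action_localisation': 1.0}
```

Because every track is scored with the same multiple-choice accuracy, even a trivial baseline like the lambda above can participate in all of them without task-specific heads or post-processing, which is what rules out engineered per-task pipelines.
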
Problem

Research questions and friction points this paper is trying to address.

multimodal perception
unified video QA
video-language models
perception tasks
task unification
Innovation

Methods, ideas, or system contributions that make the work stand out.

unified video QA
multimodal perception
task unification
video-language models
Perception Test