YUV20K: A Complexity-Driven Benchmark and Trajectory-Aware Alignment Model for Video Camouflaged Object Detection

📅 2026-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of video camouflaged object detection in complex dynamic scenes, where appearance instability and temporal feature misalignment hinder performance, compounded by the lack of high-difficulty benchmark datasets. To this end, the authors introduce YUV20K, the first large-scale, pixel-level annotated benchmark designed around six challenging scenario types, including large-displacement motion and camera movement. They further propose a novel framework that achieves frame-independent feature stabilization through semantic primitives and enhances temporal consistency via a trajectory-aware deformable alignment strategy. The method substantially outperforms state-of-the-art models on existing datasets and establishes a new baseline on YUV20K, demonstrating superior robustness under complex spatiotemporal variations and strong cross-domain generalization.

📝 Abstract
Video Camouflaged Object Detection (VCOD) is currently constrained by the scarcity of challenging benchmarks and the limited robustness of models against erratic motion dynamics. Existing methods often struggle with Motion-Induced Appearance Instability and Temporal Feature Misalignment caused by complex motion scenarios. To address the data bottleneck, we present YUV20K, a pixel-level annotated, complexity-driven VCOD benchmark. Comprising 24,295 annotated frames across 91 scenes and 47 species, it specifically targets challenging scenarios such as large-displacement motion, camera motion, and four other scenario types. On the methodological front, we propose a novel framework featuring two key modules: Motion Feature Stabilization (MFS) and Trajectory-Aware Alignment (TAA). The MFS module utilizes frame-agnostic Semantic Basis Primitives to stabilize features, while the TAA module leverages trajectory-guided deformable sampling to ensure precise temporal alignment. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art competitors on existing datasets and establishes a new baseline on the challenging YUV20K. Notably, our framework exhibits superior cross-domain generalization and robustness when confronting complex spatiotemporal scenarios. Our code and dataset will be available at https://github.com/K1NSA/YUV20K
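The abstract's "trajectory-guided deformable sampling" can be pictured as warping previous-frame features along per-pixel trajectory offsets before fusion with the current frame. The paper does not specify the implementation; the sketch below is a hypothetical, minimal NumPy illustration of the underlying idea (bilinear sampling at trajectory-displaced coordinates), with the function names `bilinear_sample` and `trajectory_aligned` chosen here for illustration, not taken from the authors' code.

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Bilinearly interpolate feat (H, W, C) at float coordinates (ys, xs)."""
    H, W, _ = feat.shape
    ys = np.clip(ys, 0, H - 1)
    xs = np.clip(xs, 0, W - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[..., None]  # fractional weights, broadcast over channels
    wx = (xs - x0)[..., None]
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def trajectory_aligned(prev_feat, traj_offsets):
    """Warp previous-frame features (H, W, C) along per-pixel (dy, dx)
    trajectory offsets (H, W, 2), aligning them to the current frame."""
    H, W, _ = prev_feat.shape
    gy, gx = np.mgrid[0:H, 0:W].astype(float)
    return bilinear_sample(prev_feat,
                           gy + traj_offsets[..., 0],
                           gx + traj_offsets[..., 1])
```

With zero offsets the warp is the identity; a uniform one-pixel horizontal offset shifts the feature map by one column, which is the behavior a temporal alignment module relies on before aggregating features across frames. A learned version would predict `traj_offsets` from motion cues rather than take them as input.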
Problem

Research questions and friction points this paper is trying to address.

Video Camouflaged Object Detection
Motion-Induced Appearance Instability
Temporal Feature Misalignment
Complex Motion Scenarios
Benchmark Scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Camouflaged Object Detection
Complexity-Driven Benchmark
Motion Feature Stabilization
Trajectory-Aware Alignment
Temporal Feature Alignment