🤖 AI Summary
Addressing core challenges in few-shot video object detection (FSVOD)—including poor generalization to novel classes due to scarce annotations and degraded temporal consistency under occlusion and appearance variations—this paper proposes an object-aware temporal modeling framework. Built upon the Vision Transformer architecture, the method introduces a selective feature filtering mechanism and a focus-guided feature propagation strategy, enabling robust transfer of high-confidence features across frames without relying on complex region proposal generation. Crucially, temporal modeling is explicitly coupled with object-level feature evolution, enhancing robustness in dynamic scenes. Evaluated on four benchmarks—FSVOD-500, FSYTV-40, VidOR, and VidVRD—the approach achieves AP improvements of +3.7%, +5.3%, +4.3%, and +4.5% respectively under the 5-shot setting, and consistently outperforms state-of-the-art methods across 1-, 3-, and 10-shot configurations.
📝 Abstract
Few-shot Video Object Detection (FSVOD) addresses the challenge of detecting novel objects in videos from only a few labeled examples, overcoming the constraints of traditional detection methods that require extensive training data. The task presents two key challenges: maintaining temporal consistency across frames affected by occlusion and appearance variations, and generalizing to novel objects without relying on complex region proposals, which are computationally expensive and require task-specific training. Our object-aware temporal modeling approach addresses these challenges with a filtering mechanism that selectively propagates high-confidence object features across frames. This enables efficient feature propagation, reduces noise accumulation, and enhances detection accuracy in the few-shot setting. By combining few-shot trained detection and classification heads with focused feature propagation, we achieve robust temporal consistency without depending on explicit object tube proposals. Our approach achieves AP improvements of 3.7% (FSVOD-500), 5.3% (FSYTV-40), 4.3% (VidOR), and 4.5% (VidVRD) in the 5-shot setting, with further improvements demonstrated in the 1-shot, 3-shot, and 10-shot configurations. The code is publicly available at: https://github.com/yogesh-iitj/fs-video-vit
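The confidence-based filtering described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-frame object features are already aligned across frames (e.g., via query slots in a DETR-style ViT detector), and the function name, threshold `tau`, and blending weight `alpha` are hypothetical choices for exposition.

```python
import numpy as np

def propagate_high_confidence(prev_feats, prev_scores, curr_feats,
                              tau=0.7, alpha=0.5):
    """Selectively propagate high-confidence object features across frames.

    prev_feats:  (N, D) object features from the previous frame
    prev_scores: (N,)   detection confidences for those features
    curr_feats:  (N, D) features for the same object slots in the current frame

    Only slots whose previous-frame confidence exceeds `tau` receive a
    blended feature; low-confidence slots are filtered out, limiting
    noise accumulation over time.
    """
    keep = prev_scores >= tau                 # confidence filter
    out = curr_feats.copy()
    # Blend current features with trusted history for high-confidence slots
    out[keep] = alpha * curr_feats[keep] + (1 - alpha) * prev_feats[keep]
    return out
```

Applied frame by frame, this keeps temporal context flowing only through reliable detections, so occluded or ambiguous slots fall back to their current-frame features instead of inheriting noisy history.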