🤖 AI Summary
To address the challenge of automatically discovering and describing video motion patterns in natural language without user queries, particularly under appearance-degraded conditions such as occlusion, camouflage, and rapid motion, this paper proposes the first motion-aligned contrastive vision-language representation framework. Methodologically, the authors design a motion-field attention mechanism that explicitly aligns optical-flow trajectories with the textual semantic space, introduce a query-free multi-motion expression discovery module driven by multi-head cross-modal attention, and jointly optimize global video-text matching with fine-grained trajectory-text spatial grounding. Evaluated on the MeViS benchmark, the approach achieves 58.4% video-to-text retrieval accuracy, a J&F spatial grounding score of 64.9, and an average of 4.8 high-quality motion expressions discovered per video at 84.7% precision, demonstrating strong cross-task generalization.
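The two training signals named above (motion-field attention plus the joint global/fine-grained objective) can be illustrated with a minimal PyTorch sketch. Everything here is a hedged reconstruction: the module names, dimensions, temperature, loss forms, and the weighting `alpha` are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionFieldAttention(nn.Module):
    """Cross-attention in which text tokens query motion (optical-flow
    trajectory) features, aligning the two in a shared embedding space.
    Dimensions are illustrative assumptions, not the paper's."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, motion_feats):
        # text_tokens:  (B, L, D) text token embeddings
        # motion_feats: (B, T, D) per-trajectory flow features
        aligned, weights = self.attn(text_tokens, motion_feats, motion_feats)
        return aligned, weights  # weights act as soft trajectory-text grounding

def joint_loss(video_emb, text_emb, attn_weights, gt_masks, alpha=0.5):
    """Global video-text matching + fine-grained spatial grounding.
    Both terms are plausible stand-ins, not the paper's exact losses."""
    # Global term: symmetric InfoNCE-style contrastive matching.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / 0.07  # (B, B) similarities; temperature is assumed
    targets = torch.arange(logits.size(0), device=logits.device)
    l_global = 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))
    # Fine-grained term: push attention mass toward ground-truth trajectory
    # assignments (gt_masks in [0, 1], same shape as attn_weights).
    l_ground = F.binary_cross_entropy(attn_weights, gt_masks)
    return l_global + alpha * l_ground
```

In this sketch the same attention weights that align trajectories with text double as the spatial-grounding prediction, which is one natural way to couple the global and fine-grained objectives.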
📝 Abstract
We propose Track and Caption Any Motion (TCAM), a motion-centric framework for automatic video understanding that discovers and describes motion patterns without user queries. Understanding videos in challenging conditions like occlusion, camouflage, or rapid movement often depends more on motion dynamics than on static appearance. TCAM autonomously observes a video, identifies multiple motion activities, and spatially grounds each natural language description to its corresponding trajectory through a motion-field attention mechanism. Our key insight is that motion patterns, when aligned with contrastive vision-language representations, provide powerful semantic signals for recognizing and describing actions. Through unified training that combines global video-text alignment with fine-grained spatial correspondence, TCAM enables query-free discovery of multiple motion expressions via multi-head cross-attention. On the MeViS benchmark, TCAM achieves 58.4% video-to-text retrieval accuracy, 64.9 J&F for spatial grounding, and discovers 4.8 relevant expressions per video with 84.7% precision, demonstrating strong cross-task generalization.
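As a rough illustration of the query-free discovery step, the sketch below uses a small set of learned "motion slots" that cross-attend to video features, with a confidence head deciding which slots correspond to real motion expressions. The slot count, scoring head, and keep threshold are hypothetical choices; the paper's actual decoder may be structured differently.

```python
import torch
import torch.nn as nn

class MotionExpressionDiscovery(nn.Module):
    """Query-free discovery via learned queries ("motion slots") decoded by
    multi-head cross-attention. All sizes are illustrative assumptions."""
    def __init__(self, dim: int = 256, heads: int = 8, num_slots: int = 10):
        super().__init__()
        self.slots = nn.Parameter(0.02 * torch.randn(num_slots, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # keep/drop confidence per slot

    def forward(self, video_feats, keep_thresh: float = 0.5):
        # video_feats: (B, N, D) flattened spatio-temporal features
        B = video_feats.size(0)
        queries = self.slots.unsqueeze(0).expand(B, -1, -1)
        slots, _ = self.cross_attn(queries, video_feats, video_feats)
        conf = self.score(slots).sigmoid().squeeze(-1)  # (B, num_slots)
        keep = conf > keep_thresh  # boolean mask over candidate slots
        return slots, conf, keep
```

Each retained slot would then be decoded into a natural-language motion expression and grounded to its trajectory (e.g., via the motion-field attention above), which is how a single pass could yield several expressions per video without any user query.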