🤖 AI Summary
This paper addresses zero-shot video object tracking without fine-tuning, proposing the DRIFT framework. Methodologically, it leverages the inherent cross-frame semantic propagation capability encoded in the self-attention maps of image diffusion models (e.g., Stable Diffusion), reinterpreting them as pixel-level label propagation kernels. Specifically, inter-frame attention correlations are extracted via DDIM inversion; target specificity is enhanced through textual inversion and adaptive head weighting; and segmentation accuracy is improved via SAM-guided mask refinement. Crucially, this work is the first to systematically uncover and exploit the temporal modeling potential embedded in diffusion models' self-attention mechanisms for fully zero-shot, training-free video semantic propagation. On standard benchmarks including DAVIS, DRIFT achieves state-of-the-art zero-shot performance, significantly improving the robustness and consistency of cross-frame label propagation.
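One step in the pipeline above, adaptive head weighting, can be sketched as blending per-head cross-frame attention maps into a single propagation kernel, with weights favoring heads that track the target well. The function and shapes below are illustrative assumptions for intuition, not the paper's actual implementation:

```python
import numpy as np

def combine_heads(head_attn, head_logits):
    """Blend per-head attention into one kernel (hypothetical sketch).

    head_attn:   (H, Nq, Nk) cross-frame attention, one map per head.
    head_logits: (H,) scores for each head (e.g., from test-time
                 optimization); softmaxed into convex weights.
    Returns a single (Nq, Nk) propagation kernel.
    """
    w = np.exp(head_logits - head_logits.max())  # stable softmax
    w /= w.sum()                                 # convex weights over heads
    return np.einsum('h,hqk->qk', w, head_attn)  # weighted sum of maps

# Toy example: 8 heads, 16 query and 16 key pixels.
rng = np.random.default_rng(0)
heads = rng.random((8, 16, 16))
kernel = combine_heads(heads, np.zeros(8))       # uniform logits -> mean of heads
assert kernel.shape == (16, 16)
```

With uniform logits the kernel reduces to a plain average over heads; in practice the weights would be tuned so heads with target-specific attention dominate.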
📄 Abstract
Image diffusion models, though originally developed for image generation, implicitly capture rich semantic structures that enable various recognition and localization tasks beyond synthesis. In this work, we show that their self-attention maps can be reinterpreted as semantic label propagation kernels, providing robust pixel-level correspondences between relevant image regions. Extending this mechanism across frames yields a temporal propagation kernel that enables zero-shot object tracking via segmentation in videos. We further demonstrate the effectiveness of test-time optimization strategies (DDIM inversion, textual inversion, and adaptive head weighting) in adapting diffusion features for robust and consistent label propagation. Building on these findings, we introduce DRIFT, a framework for object tracking in videos that leverages a pretrained image diffusion model with SAM-guided mask refinement, achieving state-of-the-art zero-shot performance on standard video object segmentation benchmarks.
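The core idea of reading cross-frame attention as a label propagation kernel can be sketched in a few lines: row-normalize the attention from next-frame pixels (queries) to previous-frame pixels (keys), then apply it to the previous frame's label map. Names and shapes here are assumptions for illustration, not the paper's code:

```python
import numpy as np

def propagate_labels(attn, prev_labels):
    """Propagate pixel labels from frame t to frame t+1 (hypothetical sketch).

    attn:        (N_next, N_prev) attention from each next-frame pixel
                 to all previous-frame pixels.
    prev_labels: (N_prev, C) one-hot or soft class map of frame t.
    Returns a (N_next, C) soft class map for frame t+1.
    """
    # Row-normalize so each query distributes unit probability mass
    # over the previous frame -> a row-stochastic propagation kernel.
    kernel = attn / attn.sum(axis=1, keepdims=True)
    return kernel @ prev_labels

# Toy example: 4 pixels per frame, 2 classes.
rng = np.random.default_rng(1)
attn = rng.random((4, 4))
prev = np.eye(2)[[0, 0, 1, 1]]        # pixels 0-1 are class 0, 2-3 class 1
nxt = propagate_labels(attn, prev)    # soft labels, each row sums to 1
```

Because the kernel is row-stochastic, the propagated labels remain valid class distributions, which is what makes iterating this step across a video stable.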