🤖 AI Summary
This work addresses a limitation of existing diffusion Transformer (DiT)-based motion transfer methods, which support only single-object videos and struggle to achieve fine-grained control in real-world scenes with multiple objects. To overcome this, the authors propose MotionGrounder, a framework that, for the first time, enables controllable multi-object motion transfer within a DiT architecture. The approach leverages optical flow-guided motion signals to provide stable motion priors and introduces an object-caption alignment loss to precisely associate textual object descriptions with their corresponding spatial regions. Furthermore, a new evaluation metric, the Object Grounding Score, is proposed to holistically assess spatial and semantic consistency. Experimental results demonstrate that MotionGrounder significantly outperforms current baselines across quantitative metrics, qualitative assessments, and human evaluations, achieving high-quality, fine-grained multi-object video generation.
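To make the motion-prior idea concrete, the sketch below shows one plausible way to extract dense optical flow from a reference clip using torchvision's RAFT estimator. The summary only says the signal is optical flow-guided; the choice of RAFT, the function name `flow_motion_prior`, and the conditioning interface are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: the paper's FMS is described as optical
# flow-guided, but not how the flow is computed or injected. RAFT (via
# torchvision) is one plausible flow estimator; the rest is assumed.
import torch
from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

weights = Raft_Large_Weights.DEFAULT
raft = raft_large(weights=weights).eval()
preprocess = weights.transforms()  # normalizes frame pairs for RAFT

@torch.no_grad()
def flow_motion_prior(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, H, W) video tensor in [0, 1], H and W divisible by 8.
    Returns (T-1, 2, H, W) dense flow between consecutive frames."""
    img1, img2 = preprocess(frames[:-1], frames[1:])
    # RAFT returns a list of iteratively refined flow fields; keep the last.
    return raft(img1, img2)[-1]
```

In a DiT pipeline, such flow fields would typically be patchified into tokens and injected alongside the noised video latents; the actual conditioning mechanism used by MotionGrounder is not specified in this summary.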
📝 Abstract
Motion transfer enables controllable video generation by transferring temporal dynamics from a reference video to synthesize a new video conditioned on a target caption. However, existing Diffusion Transformer (DiT)-based methods are limited to single-object videos, restricting fine-grained control in real-world scenes with multiple objects. In this work, we introduce MotionGrounder, the first DiT-based framework to handle motion transfer with multi-object controllability. In MotionGrounder, a Flow-based Motion Signal (FMS) provides a stable motion prior for target video generation, while an Object-Caption Alignment Loss (OCAL) grounds each object caption to its corresponding spatial region. We further propose a new Object Grounding Score (OGS), which jointly evaluates (i) spatial alignment between source video objects and their generated counterparts and (ii) semantic consistency between each generated object and its target caption. Our experiments show that MotionGrounder consistently outperforms recent baselines across quantitative, qualitative, and human evaluations.
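The abstract states what OGS measures but not how it is computed. Below is a minimal sketch under stated assumptions: spatial alignment is measured as per-object mask IoU, semantic consistency as CLIP image-text similarity on a crop of the generated object, and the two are combined by a product and averaged over objects. The paper's actual formulation may differ; every name here is hypothetical.

```python
# Hypothetical OGS sketch, NOT the paper's definition: IoU for the spatial
# term, CLIP similarity for the semantic term, product combined per object.
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def object_grounding_score(src_masks, gen_masks, gen_crops, captions):
    """src_masks/gen_masks: lists of (H, W) boolean tensors per object;
    gen_crops: list of PIL images cropped around each generated object;
    captions: list of per-object target caption strings."""
    scores = []
    for sm, gm, crop, cap in zip(src_masks, gen_masks, gen_crops, captions):
        # (i) Spatial alignment: IoU between source and generated masks.
        inter = (sm & gm).sum().float()
        union = (sm | gm).sum().clamp(min=1).float()
        iou = inter / union
        # (ii) Semantic consistency: CLIP similarity of crop vs. caption.
        inputs = processor(text=[cap], images=crop,
                           return_tensors="pt", padding=True)
        out = clip(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        sim = (img * txt).sum().clamp(min=0)
        scores.append(iou * sim)
    return torch.stack(scores).mean()
```

A product rather than a sum is one reasonable design choice: it forces both terms to be nonzero, so an object that lands in the right place but depicts the wrong content (or vice versa) still scores low.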