AI Summary
Existing instruction-driven image editing methods have limited capability for modeling non-rigid motion, such as human joint articulation, object deformation, camera viewpoint shifts, and complex interactions.
Method: This paper introduces ByteMorph, the first systematic framework for evaluating such editing capabilities. It (1) formally defines and benchmarks instruction-based editing under non-rigid motion; (2) constructs ByteMorph-6M, a dataset of over 6 million high-resolution edited image pairs built with motion-guided data generation, layered compositing, and automated captioning, together with ByteMorph-Bench, a carefully curated evaluation suite covering diverse dynamic scenarios; and (3) proposes ByteMorpher, a strong baseline model built on the Diffusion Transformer (DiT).
Results: Extensive experiments show that ByteMorpher outperforms state-of-the-art academic and commercial methods across multiple dynamic editing dimensions, establishing a strong baseline for instruction-driven editing of non-rigid motion.
Abstract
Editing images with instructions to reflect non-rigid motions (camera viewpoint shifts, object deformations, human articulations, and complex interactions) poses a challenging yet underexplored problem in computer vision. Existing approaches and datasets predominantly focus on static scenes or rigid transformations, limiting their capacity to handle expressive edits involving dynamic motion. To address this gap, we introduce ByteMorph, a comprehensive framework for instruction-based image editing with an emphasis on non-rigid motions. ByteMorph comprises a large-scale dataset, ByteMorph-6M, and a strong baseline model built upon the Diffusion Transformer (DiT), named ByteMorpher. ByteMorph-6M includes over 6 million high-resolution image editing pairs for training, along with a carefully curated evaluation benchmark, ByteMorph-Bench. Both capture a wide variety of non-rigid motion types across diverse environments, human figures, and object categories. The dataset is constructed using motion-guided data generation, layered compositing techniques, and automated captioning to ensure diversity, realism, and semantic coherence. We further conduct a comprehensive evaluation of recent instruction-based image editing methods from both academic and commercial domains.
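To make the benchmark setup above concrete, the sketch below shows one plausible record layout for an instruction-based editing pair (source image, edit instruction, ground-truth target) and a minimal grouping step so each non-rigid motion category can be scored separately. All names here (`EditPair`, `group_by_motion`, the field names and category labels) are hypothetical illustrations, not ByteMorph's actual data schema or API.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical record for one instruction-based editing pair:
# a source image, a natural-language instruction describing a
# non-rigid motion edit, and the ground-truth edited image.
@dataclass
class EditPair:
    source_path: str   # path to the input image
    instruction: str   # e.g. "raise the person's left arm"
    target_path: str   # path to the edited ground-truth image
    motion_type: str   # e.g. "articulation", "camera", "deformation"

def group_by_motion(pairs: List[EditPair]) -> Dict[str, List[EditPair]]:
    """Bucket benchmark pairs by motion category so each
    dynamic-editing dimension can be evaluated separately."""
    buckets: Dict[str, List[EditPair]] = {}
    for p in pairs:
        buckets.setdefault(p.motion_type, []).append(p)
    return buckets

# Tiny demo with synthetic entries (paths and instructions are made up).
pairs = [
    EditPair("a.png", "raise the left arm", "a_edit.png", "articulation"),
    EditPair("b.png", "rotate the camera left", "b_edit.png", "camera"),
    EditPair("c.png", "bend the straw", "c_edit.png", "deformation"),
    EditPair("d.png", "wave both hands", "d_edit.png", "articulation"),
]
buckets = group_by_motion(pairs)
print(sorted(buckets))              # motion categories present
print(len(buckets["articulation"])) # pairs in one category
```

Grouping by motion type mirrors the per-dimension evaluation described above: a metric (e.g. instruction faithfulness or image fidelity) can then be averaged within each bucket rather than over the whole benchmark.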