🤖 AI Summary
Existing image editing methods rely on task-specific pipelines and expert-designed models (e.g., segmentation and inpainting), which prevents learning contextual editing capabilities directly from raw video. This paper introduces the first video-driven in-context image editing paradigm: video frames and their implicit masks are modeled as an interleaved multimodal sequence, eliminating the need for manual annotations or external segmentation/inpainting models. We propose a block-causal diffusion Transformer trained end-to-end on three self-supervised proxy tasks: next-frame prediction, current-frame segmentation prediction, and next-frame segmentation prediction. Our contributions are threefold: (1) the first unified framework for learning in-context image editing from video; (2) the block-causal diffusion Transformer architecture; and (3) the first multi-turn image editing benchmark. Our method achieves state-of-the-art performance on two benchmarks and generalizes effectively to multi-concept composition, story generation, and chain-of-editing, demonstrating strong cross-task editing capability learned from unlabeled video alone.
📝 Abstract
In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on videos, our model also shows promising abilities in multi-concept composition, story generation, and chain-of-editing applications.
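The block-causal attention pattern described above can be illustrated with a small sketch. Assuming the standard convention for such models (the paper's exact implementation details are not given here): tokens attend bidirectionally within their own block (one frame or mask in the interleaved sequence) and causally to all earlier blocks. The function name and block layout below are illustrative, not taken from the paper.

```python
import numpy as np

def block_causal_mask(block_sizes):
    """Build a block-causal attention mask.

    Tokens attend bidirectionally within their own block (e.g., one video
    frame or one segmentation mask) and causally to all earlier blocks.
    True = attention allowed.
    """
    n = sum(block_sizes)
    mask = np.zeros((n, n), dtype=bool)
    start = 0
    for size in block_sizes:
        end = start + size
        # Every token in this block sees all tokens up to the block's end:
        # full attention inside the block, causal attention to the past.
        mask[start:end, :end] = True
        start = end
    return mask

# Hypothetical interleaved sequence: [frame, mask, next frame], 2 tokens each.
m = block_causal_mask([2, 2, 2])
```

With this layout, token 0 can attend to token 1 (same block) but not to token 2 (a future block), while every token in the last block attends to the whole sequence.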