VINCIE: Unlocking In-context Image Editing from Video

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image editing methods rely on task-specific pipelines and expert models (e.g., segmentation and inpainting), preventing in-context editing capabilities from being learned directly from raw video. This paper introduces a video-driven in-context image editing paradigm: videos are annotated as interleaved multimodal sequences, eliminating the need for manual annotations or external segmentation/inpainting models. The authors propose a block-causal diffusion transformer trained end-to-end on three proxy tasks: next-image prediction, current-segmentation prediction, and next-segmentation prediction. The contributions are threefold: (1) a scalable approach to learning in-context image editing directly from videos; (2) a block-causal diffusion transformer with a three-task training objective; and (3) a new multi-turn image editing benchmark. The method achieves state-of-the-art results on two multi-turn image editing benchmarks and generalizes effectively to multi-concept composition, story generation, and chain-of-editing, despite being trained exclusively on unlabeled video data.

📝 Abstract
In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on videos, our model also shows promising abilities in multi-concept composition, story generation, and chain-of-editing applications.
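The abstract's "block-causal" attention pattern can be illustrated with a small mask-construction sketch. This is a generic illustration under the assumption that each frame (or frame/segmentation pair) in the interleaved sequence forms one block whose tokens attend bidirectionally within the block and causally to earlier blocks; it is not the paper's actual implementation.

```python
import numpy as np

def block_causal_mask(block_sizes):
    """Boolean attention mask for a block-causal transformer.

    Tokens attend bidirectionally within their own block and causally to
    all tokens in earlier blocks; attention to later blocks is disallowed.
    mask[i, j] is True where query token i may attend to key token j.
    """
    total = sum(block_sizes)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for size in block_sizes:
        end = start + size
        # Every token in this block sees all tokens up to the block's end,
        # i.e., its own block plus all preceding blocks.
        mask[start:end, :end] = True
        start = end
    return mask

# Toy example: three "frames" of 2, 3, and 2 tokens each.
m = block_causal_mask([2, 3, 2])
```

Such a mask would be passed to the attention layers so that, for instance, the tokens of a predicted next image can condition on the full context of all earlier images and text, while earlier blocks never see future ones.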
Problem

Research questions and friction points this paper is trying to address.

Develop video-based in-context image editing model
Create scalable multimodal video annotation method
Advance multi-turn image editing benchmark research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learn in-context editing directly from videos
Block-causal diffusion transformer for proxy tasks
Multi-turn image editing benchmark proposal