🤖 AI Summary
Existing visual-prompt image editing methods rely on retraining specialized text-guided image-to-image models on text–before–after triplets, which limits their generalization. This paper proposes the “Diffusion Bridge” framework, which requires only a single text-to-image diffusion model and one pair of real images (serving as the visual prompt), eliminating the need for image-to-image models or additional training. The key contributions are: (1) constructing a diffusion bridge across image distributions via the probability-flow ODE to enable latent-space transfer; (2) introducing the first optimization-based “visual prompt textualization”, which iteratively refines text embeddings to represent the editing intent without supervision; and (3) incorporating differential attention control to decouple the edit transformation from content-preserving regions. Experiments demonstrate state-of-the-art performance in fidelity, contextual consistency, and fine-grained control, significantly enhancing the scalability and practicality of zero-shot image editing.
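The bridge idea can be illustrated with a toy one-dimensional sketch. Here a closed-form Gaussian score stands in for the text-conditioned diffusion model: the probability-flow ODE is integrated forward (data to noise) under the “before” score and backward (noise to data) under the “after” score, so the sample is transported between the two distributions while its deviation from the source mode (the “content”) is preserved. All distributions, parameters, and function names below are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

def score(x, t, mu, s=1.0):
    # Closed-form score of a Gaussian N(mu, s^2) diffused by the VE SDE
    # with sigma(t) = t: the marginal at time t is N(mu, s^2 + t^2).
    return -(x - mu) / (s**2 + t**2)

def bridge(x0, mu_src, mu_tgt, t_min=0.01, t_max=20.0, n=4000):
    # Probability-flow ODE of the VE SDE: dx/dt = -t * score(x, t).
    # Integrate forward under the source score (deterministic inversion),
    # then backward under the target score -- a diffusion bridge.
    ts = np.linspace(t_min, t_max, n)
    dt = ts[1] - ts[0]
    x = x0
    for t in ts[:-1]:                 # "before" distribution -> noise
        x = x + dt * (-t) * score(x, t, mu_src)
    for t in ts[::-1][:-1]:           # noise -> "after" distribution
        x = x - dt * (-t) * score(x, t, mu_tgt)
    return x

# A point offset +0.5 from the source mode lands near the target mode
# with (approximately) the same offset: the edit is transferred while
# the sample-specific deviation is preserved.
edited = bridge(0.5, mu_src=0.0, mu_tgt=3.0)
```

In this linear-Gaussian toy the bridge reduces to shifting the mode while keeping the deviation, which is exactly the behavior the framework relies on: invariant content survives the round trip, and only the distribution-level change (here, the mean; in the paper, the text-guided edit) is swapped.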
📝 Abstract
A visual prompt, i.e., a pair of before-and-after edited images, can convey image transformations that are hard to describe in words and has proven effective in image editing. However, current visual-prompt methods rely on a pretrained text-guided image-to-image generative model that requires triplets of text, before image, and after image for retraining over a text-to-image model. Crafting such triplets and retraining limit the scalability and generalization of editing. In this paper, we present a framework based on any single text-to-image model, without reliance on an explicit image-to-image model, thus enhancing generalizability and scalability. Specifically, by leveraging the probability-flow ordinary differential equation (ODE), we construct a diffusion bridge that transfers between the distributions of the before and after images under text guidance. By optimizing the text through the bridge, the framework adaptively textualizes the editing transformation conveyed by the visual prompt into text embeddings without any auxiliary model. Meanwhile, we introduce differential attention control during text optimization, which disentangles the text embedding from the invariant content shared by the before and after images, so that it captures only the delicate transformation and generalizes to editing various images. Experiments on real images demonstrate competitive generalization, contextual coherence, and high fidelity for delicate editing with just one image pair as the visual prompt.
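The textualization step can be sketched, under heavy simplification, as gradient descent on a text embedding through a differentiable editing operator, with a binary mask playing the role of differential attention control: the loss only sees the edited region, so the embedding is never forced to explain invariant content. The linear decoder `W`, the mask, `apply_edit`, and all shapes are hypothetical stand-ins for the diffusion bridge, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
D, E = 16, 4                                   # toy "pixel" and embedding dimensions
W = rng.normal(size=(D, E)) / np.sqrt(D)       # toy decoder: embedding -> image change
before = rng.normal(size=D)
true_e = np.array([1.0, -2.0, 0.5, 0.0])       # ground-truth "edit" embedding
edit_mask = (np.arange(D) < 8).astype(float)   # stand-in for differential attention:
                                               # the edit touches only the first half
after = before + edit_mask * (W @ true_e)      # the visual prompt: (before, after)

def apply_edit(x, e):
    # Toy stand-in for transporting x across the bridge under embedding e.
    return x + W @ e

e = np.zeros(E)                                # optimized text embedding
for _ in range(5000):
    resid = apply_edit(before, e) - after
    # Differential attention control: restrict the loss gradient to the
    # edited region, disentangling e from the invariant content.
    e -= 0.5 * (W.T @ (edit_mask * resid))

# The learned embedding now generalizes: apply the same edit elsewhere.
new_before = rng.normal(size=D)
new_after = apply_edit(new_before, e)
```

In this quadratic toy problem the masked least-squares objective has a unique minimizer, so `e` recovers the underlying edit exactly; the paper’s point is analogous, namely that masking out invariant regions during optimization leaves the embedding encoding only the transformation itself.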