DeltaEdit: Exploring Text-free Training for Text-Driven Image Manipulation

📅 2023-03-11
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 41
Influential: 7
📄 PDF
🤖 AI Summary
Text-driven image editing faces two key bottlenecks: dependence on large-scale annotated datasets, or optimization repeated for every new text instruction. This paper proposes DeltaEdit, a zero-shot, text-supervision-free image editing framework. Its core innovation is the construction of a CLIP delta space that explicitly aligns visual feature differences with textual semantic differences, eliminating the need for textual annotations or prompt-level optimization. Methodologically, DeltaEdit models StyleGAN latent-space editing directions as lightweight mappings from CLIP visual/text embedding deltas, learned end-to-end via a dedicated DeltaEdit network. Extensive evaluation on FFHQ, AFHQ, and other benchmarks demonstrates high-fidelity and generalizable editing: arbitrary novel text prompts can be applied out of the box, without fine-tuning, retraining, or hyper-parameter adjustment.
📝 Abstract
Text-driven image manipulation remains challenging in terms of training and inference flexibility. Conditional generative models depend heavily on expensive annotated training data. Meanwhile, recent frameworks that leverage pre-trained vision-language models are limited by either per-text-prompt optimization or inference-time hyper-parameter tuning. In this work, we propose a novel framework named DeltaEdit to address these problems. Our key idea is to investigate and identify a space, namely the delta image and text space, in which CLIP visual feature differences of two images and CLIP textual embedding differences of source and target texts are well aligned in distribution. Based on this CLIP delta space, the DeltaEdit network is designed to map CLIP visual feature differences to the editing directions of StyleGAN during the training phase. Then, in the inference phase, DeltaEdit predicts StyleGAN's editing directions from the differences of CLIP textual features. In this way, DeltaEdit is trained in a text-free manner. Once trained, it generalizes well to various text prompts for zero-shot inference without bells and whistles. Code is available at https://github.com/Yueming6568/DeltaEdit.
Problem

Research questions and friction points this paper is trying to address.

Text-driven image manipulation lacks training flexibility
Conditional models need costly annotated training data
Existing frameworks require per-prompt optimization or tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-free training using CLIP delta space
Mapping visual differences to StyleGAN directions
Zero-shot inference with various text prompts
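The training/inference asymmetry described above can be sketched with a toy example. This is an illustrative sketch only: the feature vectors are random stand-ins for CLIP embeddings, and `delta_mapper` is a fixed linear map standing in for the learned DeltaEdit network, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
CLIP_DIM, STYLE_DIM = 512, 512  # illustrative dimensions

# Stand-in for the learned Delta Mapper: here just a fixed linear map.
W = rng.normal(scale=0.02, size=(CLIP_DIM, STYLE_DIM))

def delta_mapper(delta_clip):
    """Map a CLIP-space difference vector to a StyleGAN editing direction."""
    return delta_clip @ W

# --- Training phase (text-free): use CLIP *image* feature differences.
feat_src_img = rng.normal(size=CLIP_DIM)  # CLIP feature of a source image
feat_tgt_img = rng.normal(size=CLIP_DIM)  # CLIP feature of a target image
delta_image = feat_tgt_img - feat_src_img
edit_dir_train = delta_mapper(delta_image)  # supervised against StyleGAN latent offsets

# --- Inference phase (zero-shot): swap in CLIP *text* feature differences.
feat_src_txt = rng.normal(size=CLIP_DIM)  # e.g. CLIP embedding of "face"
feat_tgt_txt = rng.normal(size=CLIP_DIM)  # e.g. CLIP embedding of "smiling face"
delta_text = feat_tgt_txt - feat_src_txt
edit_dir_infer = delta_mapper(delta_text)  # applied to the StyleGAN latent code

print(edit_dir_train.shape, edit_dir_infer.shape)
```

Because the delta image and delta text spaces are well aligned in CLIP, the same mapper trained only on image-feature differences can consume text-feature differences at inference, which is what makes the training text-free and the inference zero-shot.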
Yueming Lyu
School of Artificial Intelligence, University of Chinese Academy of Sciences
Tianwei Lin
Zhejiang University
Fu Li
VIS, Baidu Inc.
Dongliang He
ByteDance Inc.
Jing Dong
CRIPAC, Institute of Automation, Chinese Academy of Sciences
Tieniu Tan
Institute of Automation, Chinese Academy of Sciences