🤖 AI Summary
This work addresses the scarcity of instruction-image paired data and poor generalization in Iterative Language-Based Image Editing (ILBIE). We propose the first self-supervised counterfactual reasoning framework (SSCR), which eliminates the need for human annotations by modeling counterfactual editing trajectories and enforcing cross-task consistency (CTC) constraints, enabling self-supervised training across multiple editing steps. Its core innovation lies in embedding human-like counterfactual reasoning into an iterative decoding architecture, substantially reducing reliance on large-scale paired data. On the i-CLEVR and CoDraw benchmarks, SSCR achieves state-of-the-art performance: object identity and position editing accuracy improve by 12.3% and 9.7%, respectively. Remarkably, it attains full-data performance using only 50% of the training data, demonstrating superior generalization capability and data efficiency.
📝 Abstract
Iterative Language-Based Image Editing (ILBIE) tasks follow iterative instructions to edit images step by step. Data scarcity is a significant issue for ILBIE, as it is challenging to collect large-scale examples of images before and after instruction-based changes. However, humans still accomplish these editing tasks even when presented with an unfamiliar image-instruction pair. Such ability results from counterfactual thinking, the ability to consider alternatives to events that have already happened. In this paper, we introduce a Self-Supervised Counterfactual Reasoning (SSCR) framework that incorporates counterfactual thinking to overcome data scarcity. SSCR allows the model to consider out-of-distribution instructions paired with previous images. With the help of cross-task consistency (CTC), we can train on these counterfactual instructions in a self-supervised scenario. Extensive results show that SSCR improves the correctness of ILBIE in terms of both object identity and position, establishing a new state of the art (SOTA) on two ILBIE datasets (i-CLEVR and CoDraw). Even with only 50% of the training data, SSCR achieves a comparable result to using complete data.
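The self-supervised loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `edit`, `reconstruct_instruction`, and `ctc_loss` are hypothetical stand-ins for the neural editor, the instruction decoder, and the token-level consistency loss, assuming that CTC scores a counterfactual edit by how well the instruction can be recovered from the before/after image pair (so no ground-truth target image is needed).

```python
# Hedged sketch of SSCR-style counterfactual training with cross-task
# consistency (CTC). All function names here are illustrative placeholders,
# not the paper's actual API; real models would be neural networks.

def edit(image, instruction):
    # Stand-in editor: apply the instruction to a toy image state
    # (represented as a list of applied instructions).
    return image + [instruction]

def reconstruct_instruction(prev_image, new_image):
    # Stand-in instruction decoder: recover what changed between two images.
    return new_image[len(prev_image):][0]

def ctc_loss(predicted_instruction, counterfactual_instruction):
    # Toy consistency loss: 0 when the decoder recovers the instruction,
    # 1 otherwise (a real model would use a token-level loss).
    return 0.0 if predicted_instruction == counterfactual_instruction else 1.0

def sscr_step(prev_image, counterfactual_instruction):
    """One self-supervised step: edit the previous image with a sampled
    counterfactual instruction, then score the edit by reconstructing
    that instruction from the before/after pair."""
    new_image = edit(prev_image, counterfactual_instruction)
    predicted = reconstruct_instruction(prev_image, new_image)
    return ctc_loss(predicted, counterfactual_instruction)

loss = sscr_step(["add a red cube"], "add a blue sphere left of the cube")
print(loss)  # 0.0 for this consistent stand-in editor/decoder pair
```

The point of the sketch is the supervision signal: because the loss is computed only from the instruction and the model's own edit, the framework can train on out-of-distribution instructions for which no ground-truth edited image exists.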