GPT-IMAGE-EDIT-1.5M: A Million-Scale, GPT-Generated Image Dataset

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
The opacity of closed-source large multimodal models (e.g., GPT-4o) and their proprietary training data severely hinders reproducible research in instruction-guided image editing. Method: This work introduces GPT-IMAGE-EDIT-1.5M, a large-scale, publicly available instruction-guided image editing dataset comprising 1.5 million high-quality (instruction, source image, edited image) triplets. To ensure consistency and alignment fidelity, the authors leverage GPT-4o to uniformly reconstruct three popular image-editing datasets (OmniEdit, HQ-Edit, UltraEdit) via output-image regeneration and selective instruction rewriting. Contribution/Results: Fine-tuning the open-source FluxKontext model on GPT-IMAGE-EDIT-1.5M yields a score of 7.24 on GEdit-EN, significantly narrowing the performance gap to closed-source counterparts. GPT-IMAGE-EDIT-1.5M thus provides an open, reproducible foundation for instruction-guided image editing research, enabling both data-driven advances and transparent model evaluation.
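Concretely, each of the 1.5 million samples is an instruction, source image, edited image triplet. Below is a minimal sketch of how one such record might be represented in Python; the field names and types are illustrative assumptions, not the dataset's published schema.

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class EditTriplet:
    """One GPT-IMAGE-EDIT-1.5M-style training example (illustrative sketch).

    Field names are assumptions for exposition; consult the released
    dataset for the actual schema.
    """
    instruction: str           # natural-language edit instruction (possibly GPT-4o-rewritten)
    source_image: Image.Image  # original input image
    edited_image: Image.Image  # GPT-4o-regenerated target image
    source_dataset: str        # provenance: "OmniEdit", "HQ-Edit", or "UltraEdit"
```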

📝 Abstract
Recent advancements in large multimodal models like GPT-4o have set a new standard for high-fidelity, instruction-guided image editing. However, the proprietary nature of these models and their training data creates a significant barrier for open-source research. To bridge this gap, we introduce GPT-IMAGE-EDIT-1.5M, a publicly available, large-scale image-editing corpus containing more than 1.5 million high-quality triplets (instruction, source image, edited image). We systematically construct this dataset by leveraging the versatile capabilities of GPT-4o to unify and refine three popular image-editing datasets: OmniEdit, HQ-Edit, and UltraEdit. Specifically, our methodology involves 1) regenerating output images to enhance visual quality and instruction alignment, and 2) selectively rewriting prompts to improve semantic clarity. To validate the efficacy of our dataset, we fine-tune advanced open-source models on GPT-IMAGE-EDIT-1.5M. The empirical results are exciting, e.g., the fine-tuned FluxKontext achieves highly competitive performance across a comprehensive suite of benchmarks, including 7.24 on GEdit-EN, 3.80 on ImgEdit-Full, and 8.78 on Complex-Edit, showing stronger instruction following and higher perceptual quality while maintaining identity. These scores markedly exceed all previously published open-source methods and substantially narrow the gap to leading proprietary models. We hope the full release of GPT-IMAGE-EDIT-1.5M can help to catalyze further open research in instruction-guided image editing.
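The two construction steps named in the abstract, regenerating output images and selectively rewriting prompts, can be pictured with the short sketch below. It is a hypothetical illustration built on the public OpenAI Python client rather than the authors' actual pipeline; the model identifiers and helper names are assumptions.

```python
import base64
from openai import OpenAI  # official openai Python package

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rewrite_instruction(raw_instruction: str) -> str:
    """Ask GPT-4o to rewrite a terse or ambiguous edit instruction for clarity."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rewrite this image-editing instruction so it is clear, "
                        "specific, and unambiguous. Return only the instruction."},
            {"role": "user", "content": raw_instruction},
        ],
    )
    return response.choices[0].message.content.strip()

def regenerate_edited_image(source_path: str, instruction: str, out_path: str) -> None:
    """Regenerate the edited target image from the source image and instruction."""
    with open(source_path, "rb") as f:
        result = client.images.edit(
            model="gpt-image-1",  # assumed model id for GPT-4o-grade image editing
            image=f,
            prompt=instruction,
        )
    # gpt-image-1 returns base64-encoded image data.
    with open(out_path, "wb") as out:
        out.write(base64.b64decode(result.data[0].b64_json))
```

A construction loop over the three source datasets would then call rewrite_instruction on prompts flagged as unclear and regenerate_edited_image on every pair, yielding triplets of the kind described above.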
Problem

Research questions and friction points this paper is trying to address.

Lack of open-source large-scale image-editing datasets
Proprietary models limit research accessibility
Need for high-quality instruction-aligned image triplets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages GPT-4o to unify three existing image-editing datasets
Regenerates output images and selectively rewrites prompts for quality and clarity
Fine-tunes open-source models (e.g., FluxKontext) to state-of-the-art open editing performance (see the loading sketch below)
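For the fine-tuning step referenced in the last bullet, a natural starting point is streaming the released triplets from the Hugging Face Hub. The snippet below is a sketch under the assumption that the dataset is published there; the repository id and column names are placeholders, not confirmed by the paper.

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Placeholder repository id; substitute the official GPT-IMAGE-EDIT-1.5M release.
REPO_ID = "your-org/GPT-IMAGE-EDIT-1.5M"

# Stream the corpus so the 1.5M samples never need to fit on local disk at once.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for example in ds.take(3):
    # Inspect the available columns; actual names depend on the released schema.
    print(sorted(example.keys()))
```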