Controlled Training Data Generation with Diffusion Models

📅 2024-03-22
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently generating high-quality training data required for supervised learning in text-to-image generation models. We propose the Guided Adversarial Prompts (GAP) framework—a closed-loop data generation system integrating three core mechanisms: (1) adversarial prompt optimization guided by supervised model loss, (2) target distribution alignment via feature matching or discriminator-based guidance, and (3) online feedback adaptation. GAP is the first method to synergistically couple adversarial generation with explicit distributional constraints, shifting data synthesis from open-loop, static prompting to closed-loop, adaptive refinement. Empirical evaluation across diverse settings—including multi-task learning, heterogeneous model architectures, and distribution shifts (e.g., spurious correlations, unseen domains)—demonstrates substantial improvements in downstream model generalization. Data utilization efficiency increases by up to 3.2× compared to baseline approaches.

Technology Category

Application Category

📝 Abstract
In this work, we present a method to control a text-to-image generative model to produce training data specifically"useful"for supervised learning. Unlike previous works that employ an open-loop approach and pre-define prompts to generate new data using either a language model or human expertise, we develop an automated closed-loop system which involves two feedback mechanisms. The first mechanism uses feedback from a given supervised model and finds adversarial prompts that result in image generations that maximize the model loss. While these adversarial prompts result in diverse data informed by the model, they are not informed of the target distribution, which can be inefficient. Therefore, we introduce the second feedback mechanism that guides the generation process towards a certain target distribution. We call the method combining these two mechanisms Guided Adversarial Prompts. We perform our evaluations on different tasks, datasets and architectures, with different types of distribution shifts (spuriously correlated data, unseen domains) and demonstrate the efficiency of the proposed feedback mechanisms compared to open-loop approaches.
Problem

Research questions and friction points this paper is trying to address.

Control text-to-image models for supervised training data
Automate closed-loop feedback for adversarial prompt generation
Guide generation to match target data distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated closed-loop system with feedback mechanisms
Adversarial prompts maximize supervised model loss
Guided generation towards target distribution
🔎 Similar Papers
No similar papers found.
Teresa Yeo
Teresa Yeo
Singapore-MIT Alliance for Research and Technology, MIT
Machine learning
Andrei Atanov
Andrei Atanov
EPFL
Machine LearningDeep LearningComputer Vision
H
Harold Benoit
Swiss Federal Institute of Technology Lausanne (EPFL)
A
Aleksandr Alekseev
Swiss Federal Institute of Technology Lausanne (EPFL)
R
Ruchira Ray
Swiss Federal Institute of Technology Lausanne (EPFL)
P
Pooya Esmaeil Akhoondi
Swiss Federal Institute of Technology Lausanne (EPFL)
Amir Zamir
Amir Zamir
Professor of Computer Science, EPFL
Computer VisionMachine LearningRoboticsAI