Training-Free Synthetic Data Generation with Dual IP-Adapter Guidance

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-shot image classification suffers from severe data scarcity, and existing text-to-image diffusion-based synthesis methods typically require extensive fine-tuning or external auxiliary information. This paper proposes DIPSY: a training-free, tool-free image synthesis framework for discriminative image-to-image generation via dual IP-Adapters. Its core innovations are an extended classifier-free guidance mechanism—enabling independent control of positive and negative image conditions—and a class-similarity-driven contrastive sampling strategy. DIPSY generates high-fidelity, discriminative synthetic images end-to-end using only a few support samples. Evaluated on ten benchmark datasets—including multiple fine-grained recognition tasks—it achieves state-of-the-art or competitive performance, significantly improving few-shot generalization. Crucially, DIPSY eliminates the need for model adaptation, external tools, or manual post-filtering, offering a fully self-contained and efficient solution.

📝 Abstract
Few-shot image classification remains challenging due to the limited availability of labeled examples. Recent approaches have explored generating synthetic training data using text-to-image diffusion models, but often require extensive model fine-tuning or external information sources. We present a novel training-free approach, called DIPSY, that leverages IP-Adapter for image-to-image translation to generate highly discriminative synthetic images using only the available few-shot examples. DIPSY introduces three key innovations: (1) an extended classifier-free guidance scheme that enables independent control over positive and negative image conditioning; (2) a class similarity-based sampling strategy that identifies effective contrastive examples; and (3) a simple yet effective pipeline that requires no model fine-tuning or external captioning and filtering. Experiments across ten benchmark datasets demonstrate that our approach achieves state-of-the-art or comparable performance, while eliminating the need for generative model adaptation or reliance on external tools for caption generation and image filtering. Our results highlight the effectiveness of leveraging dual image prompting with positive-negative guidance for generating class-discriminative features, particularly for fine-grained classification tasks.
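The abstract describes an extended classifier-free guidance scheme with independent control over positive and negative image conditions. The paper's exact formulation is not reproduced on this page; the sketch below shows one plausible form, a standard CFG update extended with separately weighted positive and negative image-condition terms. All function and parameter names (`dual_guided_eps`, `w_text`, `w_pos`, `w_neg`) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def dual_guided_eps(eps_uncond, eps_text, eps_pos, eps_neg,
                    w_text=7.5, w_pos=1.0, w_neg=1.0):
    """Hypothetical extended classifier-free guidance.

    Each eps_* is the diffusion model's noise prediction under one
    conditioning: unconditional, text prompt, positive image prompt
    (same-class support example), and negative image prompt
    (contrastive example). Positive and negative image terms get
    independent weights; the negative term is subtracted to push the
    sample away from the contrastive class. Weights are placeholders,
    not values from the paper.
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_pos * (eps_pos - eps_uncond)
            - w_neg * (eps_neg - eps_uncond))
```

With `w_pos = w_neg = 0` this reduces to ordinary text-only classifier-free guidance, which is one sanity check on the form.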
Problem

Research questions and friction points this paper is trying to address.

Few-shot image classification is constrained by the scarcity of labeled examples
Existing diffusion-based synthesis methods require extensive fine-tuning or external information sources
Generating class-discriminative synthetic data from only a few support images remains difficult
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free dual IP-Adapter image generation
Independent positive-negative image conditioning control
Class similarity-based contrastive sampling strategy
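The class-similarity-based contrastive sampling above could be sketched as follows: for each class, rank the other classes by cosine similarity of their support-set embeddings and take the most similar ones as negative image conditions. This is a minimal illustration under assumed inputs (one mean embedding per class); the function name and interface are not from the paper.

```python
import numpy as np

def pick_negative_classes(class_embs, k=1):
    """Illustrative contrastive sampling.

    class_embs: (n_classes, dim) array, e.g. mean embeddings of each
    class's few-shot support images. Returns, for each class, the
    indices of the k most cosine-similar *other* classes, which would
    serve as negative image prompts.
    """
    e = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    sim = e @ e.T                      # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)     # a class cannot be its own negative
    return np.argsort(-sim, axis=1)[:, :k]
```

The intuition is that hard negatives (visually similar classes) force the guidance to emphasize whatever distinguishes the target class, which matches the paper's stated focus on fine-grained tasks.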
Luc Boudier
LIX, École Polytechnique, IP Paris, CNRS
Loris Manganelli
LIX, École Polytechnique, IP Paris, CNRS
Eleftherios Tsonis
LIX, École Polytechnique, IP Paris, CNRS
Nicolas Dufour
École des Ponts & École Polytechnique, IP Paris
Computer Vision, Machine Learning, Deep Learning
Vicky Kalogeiton
École Polytechnique, IP Paris
Computer Vision, Deep Learning