🤖 AI Summary
To address the performance limitations of object detectors trained under few-shot conditions, this paper proposes a diffusion-based conditional synthetic data generation method and systematically compares prompt-driven and layout-driven control strategies for synthetic data quality. Experiments across four standard benchmarks covering 80 visual concepts show that aligning synthetic layouts with the real training distribution significantly improves detection performance, yielding an average 34% mAP improvement and gains of up to 177%. The study further reveals a trade-off between concept diversity and the efficacy of each conditioning strategy: prompt conditioning performs best when the set of concepts is narrow, while layout conditioning wins as diversity grows. By replacing weeks of 3D rendering with minutes of diffusion-based generation, the approach removes a longstanding speed bottleneck in synthetic data pipelines and offers a reproducible, interpretable, and controllable synthesis recipe for industrial vision systems operating under data-scarce conditions.
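The relative gains above are percentage improvements over a real-data-only baseline. As a quick illustration (with hypothetical mAP values chosen only to show the arithmetic, not numbers from the paper):

```python
def relative_map_gain(map_real_only: float, map_with_synthetic: float) -> float:
    """Return the relative mAP improvement as a percentage."""
    return (map_with_synthetic - map_real_only) / map_real_only * 100.0

# Hypothetical example: a detector at 0.200 mAP on real data alone
# that reaches 0.268 mAP with synthetic augmentation has gained 34%.
gain = relative_map_gain(0.200, 0.268)
print(f"{gain:.0f}% relative mAP gain")  # prints "34% relative mAP gain"
```

An absolute mAP gain of 0.068 thus reads as a 34% relative gain; the headline 177% figure is the same ratio computed for the best-case concept.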
📝 Abstract
Learning robust object detectors from only a handful of images is a critical challenge in industrial vision systems, where collecting high-quality training data can take months. Synthetic data has emerged as a key solution for data-efficient visual inspection and pick-and-place robotics. Current pipelines rely on 3D engines such as Blender or Unreal, which offer fine-grained control but still require weeks to render even a small dataset, and the resulting images often suffer from a large gap between simulation and reality. Diffusion models promise a step change because they can generate high-quality images in minutes, yet precise control, especially in low-data regimes, remains difficult. Although many adapters now extend diffusion models beyond plain text prompts, the effect of different conditioning schemes on synthetic data quality is poorly understood. We study eighty diverse visual concepts drawn from four standard object detection benchmarks and compare two conditioning strategies: prompt-based and layout-based. When the set of conditioning cues is narrow, prompt conditioning yields higher-quality synthetic data; as diversity grows, layout conditioning becomes superior. When layout cues match the full training distribution, synthetic data raises mean average precision (mAP) by an average of 34% and by as much as 177% compared with using real data alone.
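The key ingredient in the best-performing setting is that layout cues are drawn from the real training distribution. A minimal sketch of that idea, with a hypothetical `sample_layout` helper and toy layouts (the paper's actual pipeline and data structures may differ):

```python
import random

# Each layout is a list of (class_name, bbox) pairs with bboxes given as
# normalized (x, y, w, h). Sampling layouts directly from the real training
# set ensures synthetic scenes match its object-count and placement
# statistics. These example layouts are invented for illustration.
real_layouts = [
    [("screw", (0.10, 0.20, 0.15, 0.10)), ("washer", (0.50, 0.50, 0.10, 0.10))],
    [("screw", (0.60, 0.30, 0.20, 0.12))],
]

def sample_layout(layouts, rng=random):
    """Draw one layout from the empirical layout distribution."""
    return rng.choice(layouts)

layout = sample_layout(real_layouts)
# The sampled layout would then condition the image generator, e.g. be
# rendered as a control image for a layout-to-image diffusion adapter.
```

Because each sampled layout already carries class labels and boxes, the generated image comes with detection annotations for free, which is what makes the approach usable for training detectors.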