AI Summary
To address performance bottlenecks in vision tasks caused by long-tailed class distributions and data imbalance, this paper proposes an end-to-end synthetic data augmentation framework. It leverages large language models (LLMs) to autonomously generate semantic descriptions and layout constraints, which guide controllable diffusion models to synthesize high-fidelity, diverse images. The paper introduces the Composite Layout and Image Score (CLIS), a novel, annotation-free evaluation criterion that enables semantic-prior-guided distillation and selection of synthetic data, and establishes a quantifiable link between CLIS scores and downstream performance gains. On long-tailed object detection and segmentation benchmarks, the method improves mean average precision by 12.3%. Empirical analysis confirms a strong positive correlation between CLIS scores and model performance gains, validating CLIS as an effective, generalizable metric for synthetic data assessment.
Abstract
Diffusion models can generate realistic and diverse images, potentially easing the data scarcity faced by data-intensive perception tasks. However, leveraging these models to boost performance on downstream tasks with synthetic data poses several challenges, including aligning with the real data distribution, scaling synthetic sample volumes, and ensuring their quality. To bridge these gaps, we present Auto Cherry-Picker (ACP), a novel framework that generates high-quality cross-modality training samples at scale to augment perception and multi-modal training. ACP first uses LLMs to sample descriptions and layouts based on object combinations drawn from real data priors, eliminating the need for ground-truth image captions or annotations. Next, an off-the-shelf controllable diffusion model generates multiple candidate images. The generated data are then refined using a comprehensively designed metric, the Composite Layout and Image Score (CLIS), to ensure quality. Our customized, high-quality synthetic samples boost performance in various scenarios, especially on long-tailed and imbalanced datasets. Experimental results on downstream tasks demonstrate that ACP can significantly improve the performance of existing models. In addition, we find a positive correlation between CLIS and performance gains on downstream tasks, highlighting the potential of such evaluation metrics for various visual perception and MLLM tasks.
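The abstract describes a three-stage pipeline: sample descriptions and layouts, generate candidate images, then keep only the candidates whose CLIS score clears a quality bar. The "cherry-picking" stage can be sketched as a score-threshold-and-rank filter. This is a minimal illustrative sketch, not the paper's implementation: the `Candidate` class, `cherry_pick` function, and the numeric scores are all hypothetical stand-ins for the LLM, diffusion model, and CLIS scorer.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One synthetic sample. In ACP, the description and layout would be
    LLM-sampled and the image diffusion-generated; here they are stubs."""
    description: str                 # scene description
    layout: list                     # bounding boxes as (x, y, w, h)
    clis_score: float                # hypothetical composite quality score

def cherry_pick(candidates, threshold=0.5, top_k=2):
    """Keep the highest-scoring candidates whose score clears the threshold."""
    kept = [c for c in candidates if c.clis_score >= threshold]
    kept.sort(key=lambda c: c.clis_score, reverse=True)
    return kept[:top_k]

# Toy candidate pool with made-up scores.
pool = [
    Candidate("a cat on a sofa", [(10, 20, 50, 40)], 0.82),
    Candidate("a dog in a park", [(5, 5, 60, 60)], 0.31),
    Candidate("two birds on a wire", [(0, 0, 30, 10), (40, 0, 30, 10)], 0.67),
]

picked = cherry_pick(pool)
print([c.description for c in picked])
# → ['a cat on a sofa', 'two birds on a wire']
```

The low-scoring "dog in a park" sample is rejected; the survivors would then be paired with their layouts to form detection or segmentation training annotations.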