🤖 AI Summary
Existing text-to-image methods produce synthetic data that often lacks the fidelity and diversity needed to boost supervised learning performance. This paper proposes LoFT, a framework that fine-tunes LoRA weights of a diffusion model on individual real images (8–64 per class) and fuses those weights across images at inference time, so that generated samples combine fine-grained features of multiple real images. By pairing single-image fine-tuning with cross-image weight fusion, LoFT mitigates the trade-off between fidelity and diversity in few-shot guided generation. Evaluated on ten benchmark datasets with synthetic sets scaled up to roughly 1,000 images per class, training on LoFT-generated data consistently outperforms other synthetic-data methods, and accuracy continues to improve as the synthetic dataset grows.
📝 Abstract
Despite recent advances in text-to-image generation, using synthetically generated data seldom brings a significant boost in performance for supervised learning. Oftentimes, synthetic datasets do not faithfully recreate the data distribution of real data, i.e., they lack the fidelity or diversity needed for effective downstream model training. While previous work has employed few-shot guidance to address this issue, existing methods still fail to capture and generate features unique to specific real images. In this paper, we introduce a novel dataset generation framework named LoFT, LoRA-Fused Training-data Generation with Few-shot Guidance. Our method fine-tunes LoRA weights on individual real images and fuses them at inference time, producing synthetic images that combine the features of real images for improved diversity and fidelity of generated data. We evaluate the synthetic data produced by LoFT on 10 datasets, using 8 to 64 real images per class as guidance and scaling up to 1000 images per class. Our experiments show that training on LoFT-generated data consistently outperforms other synthetic dataset methods, significantly increasing accuracy as the dataset size increases. Additionally, our analysis demonstrates that LoFT generates datasets with high fidelity and sufficient diversity, which contribute to the performance improvement. The code is available at https://github.com/ExplainableML/LoFT.
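The core mechanism described above — fine-tuning separate LoRA adapters on individual real images and fusing them at inference — can be sketched with a convex combination of the low-rank weight updates. This is a minimal illustration, not the paper's exact fusion rule: the function names and the uniform-averaging choice are assumptions for the sketch, and each LoRA update is represented by its factor pair (A, B) with effective update ΔW = B @ A.

```python
import numpy as np

def lora_delta(A, B):
    """Effective weight update of one LoRA adapter: ΔW = B @ A.
    A has shape (r, d_in), B has shape (d_out, r), with rank r small."""
    return B @ A

def fuse_lora_deltas(loras, weights=None):
    """Fuse per-image LoRA adapters into one update via a convex
    combination of their ΔW matrices (hypothetical fusion rule;
    LoFT's actual scheme may fuse differently)."""
    if weights is None:
        weights = [1.0 / len(loras)] * len(loras)  # uniform by default
    return sum(w * lora_delta(A, B) for w, (A, B) in zip(weights, loras))

# Example: fuse two rank-4 adapters for a 16x16 linear layer.
rng = np.random.default_rng(0)
loras = [(rng.normal(size=(4, 16)), rng.normal(size=(16, 4)))
         for _ in range(2)]
fused = fuse_lora_deltas(loras)
print(fused.shape)  # (16, 16)
```

At generation time, the fused ΔW would be added to the frozen base weights of the corresponding layer, letting a single sampling pass draw on features learned from several distinct real images.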