🤖 AI Summary
This work addresses the severe performance degradation that current image generation models exhibit when conditioned on multiple visual references, attributing it primarily to the absence of structured long-context training data. To close this gap, the authors introduce MacroData, a dataset of 400,000 samples, each containing up to ten reference images, systematically organized along four dimensions: customization, illustration, spatial reasoning, and temporal dynamics. Complementing this, they propose MacroBench, a benchmark of 4,000 curated samples for standardized evaluation. Fine-tuning on MacroData substantially improves multi-reference image generation quality, providing empirical evidence that large-scale structured data and cross-task collaborative training are crucial for model performance. The study thus contributes foundational resources (data, benchmark, and methodology) for advancing research in multi-reference image generation.
📝 Abstract
Generating images conditioned on multiple visual references is critical for real-world applications such as multi-subject composition, narrative illustration, and novel view synthesis, yet current models suffer from severe performance degradation as the number of input references grows. We identify the root cause as a fundamental data bottleneck: existing datasets are dominated by single- or few-reference pairs and lack the structured, long-context supervision needed to learn dense inter-reference dependencies. To address this, we introduce MacroData, a large-scale dataset of 400K samples, each containing up to 10 reference images, systematically organized across four complementary dimensions -- Customization, Illustration, Spatial reasoning, and Temporal dynamics -- to provide comprehensive coverage of the multi-reference generation space. Recognizing the concurrent absence of standardized evaluation protocols, we further propose MacroBench, a benchmark of 4,000 samples that assesses generative coherence across graded task dimensions and input scales. Extensive experiments show that fine-tuning on MacroData yields substantial improvements in multi-reference generation, and ablation studies further reveal synergistic benefits of cross-task co-training and effective strategies for handling long-context complexity. The dataset and benchmark will be publicly released.
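To make the data layout the abstract describes more concrete (samples with up to ten reference images, each tagged with one of four task dimensions), here is a minimal hypothetical sketch in Python. The class name, field names, and validation logic are all assumptions for illustration; the paper does not publish a schema, and only the four dimensions and the 10-reference cap come from the text above.

```python
from dataclasses import dataclass

# Hypothetical sketch of a MacroData-style sample; field names and
# structure are assumptions, not the dataset's actual schema.
TASK_DIMENSIONS = {"customization", "illustration", "spatial", "temporal"}
MAX_REFERENCES = 10  # the abstract states up to 10 reference images per sample


@dataclass
class MultiRefSample:
    task_dimension: str        # one of the four dimensions above
    reference_images: list     # paths (or arrays) for the reference inputs
    prompt: str                # text instruction conditioning the generation
    target_image: str          # ground-truth output image

    def __post_init__(self):
        if self.task_dimension not in TASK_DIMENSIONS:
            raise ValueError(f"unknown task dimension: {self.task_dimension}")
        if not 1 <= len(self.reference_images) <= MAX_REFERENCES:
            raise ValueError("expected between 1 and 10 reference images")


# Example: a three-reference customization sample.
sample = MultiRefSample(
    task_dimension="customization",
    reference_images=[f"ref_{i}.png" for i in range(3)],
    prompt="Compose the three subjects into one coherent scene",
    target_image="target.png",
)
```

A record shaped like this would let a training pipeline batch samples by reference count and stratify evaluation across the four task dimensions, which is the kind of graded analysis MacroBench is described as supporting.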