Skywork UniPic 3.0: Unified Multi-Image Composition via Sequence Modeling

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-standing challenges of consistency and generation quality in multi-image synthesis for human-object interaction (HOI) scenes. It reformulates the task as a unified sequence-generation problem and introduces a multimodal fusion framework that flexibly processes 1–6 input images of arbitrary resolution and produces high-quality outputs. An efficient data collection, filtering, and synthesis pipeline supplies the training data, while trajectory mapping and distribution matching in the post-training stage significantly accelerate inference without sacrificing fidelity. Evaluated on both single-image editing and multi-image composition benchmarks, the approach achieves state-of-the-art performance, outperforming Nano-Banana and Seedream 4.0. Notably, it attains a 12.5× inference speedup, generating high-fidelity images in as few as eight denoising steps.

📝 Abstract
The recent surge in popularity of Nano-Banana and Seedream 4.0 underscores the community's strong interest in multi-image composition tasks. Compared to single-image editing, multi-image composition presents significantly greater challenges in terms of consistency and quality, yet existing models have not disclosed specific methodological details for achieving high-quality fusion. Through statistical analysis, we identify Human-Object Interaction (HOI) as the most sought-after category by the community. We therefore systematically analyze and implement a state-of-the-art solution for multi-image composition with a primary focus on HOI-centric tasks. We present Skywork UniPic 3.0, a unified multimodal framework that integrates single-image editing and multi-image composition. Our model supports an arbitrary number (1–6) and resolution of input images, as well as arbitrary output resolutions (within a total pixel budget of 1024×1024). To address the challenges of multi-image composition, we design a comprehensive data collection, filtering, and synthesis pipeline, achieving strong performance with only 700K high-quality training samples. Furthermore, we introduce a novel training paradigm that formulates multi-image composition as a sequence-modeling problem, transforming conditional generation into unified sequence synthesis. To accelerate inference, we integrate trajectory mapping and distribution matching into the post-training stage, enabling the model to produce high-fidelity samples in just 8 steps and achieve a 12.5× speedup over standard synthesis sampling. Skywork UniPic 3.0 achieves state-of-the-art performance on single-image editing benchmarks and surpasses both Nano-Banana and Seedream 4.0 on multi-image composition benchmarks, thereby validating the effectiveness of our data pipeline and training paradigm. Code, models, and datasets are publicly available.
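The abstract's sequence-modeling formulation — turning conditional multi-image generation into unified sequence synthesis — can be sketched as packing the prompt tokens and the tokens of each condition image into one sequence, with the target image generated as its continuation. The sketch below is illustrative only; the tokenizer, delimiters, and function names are assumptions, not the paper's actual API.

```python
# Illustrative sketch (not the paper's code): multi-image composition as
# unified sequence modeling. Each of 1-6 condition images is tokenized,
# the token streams are concatenated with the text prompt into one
# sequence, and the target image tokens are generated after the final
# begin-target marker.

def patchify(image, patch=16):
    """Split an image (rows of pixels) into flat patch tokens (toy tokenizer)."""
    h, w = len(image), len(image[0])
    tokens = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens.append(("img_patch", i, j))
    return tokens

def build_sequence(prompt_tokens, condition_images, out_h, out_w,
                   budget=1024 * 1024):
    """Pack prompt + 1-6 condition images into one input sequence.

    The output resolution is arbitrary but capped by a total pixel
    budget (1024x1024 in the paper).
    """
    assert 1 <= len(condition_images) <= 6, "model accepts 1-6 input images"
    assert out_h * out_w <= budget, "output exceeds total pixel budget"
    seq = list(prompt_tokens)
    for idx, img in enumerate(condition_images):
        seq.append(("begin_image", idx))   # delimiter per condition image
        seq.extend(patchify(img))          # inputs may have arbitrary resolution
        seq.append(("end_image", idx))
    seq.append(("begin_target", out_h, out_w))  # target tokens follow this marker
    return seq
```

Framing composition this way lets one autoregressive-style backbone handle both single-image editing (one condition image) and multi-image composition (up to six) without task-specific conditioning branches.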
Problem

Research questions and friction points this paper is trying to address.

multi-image composition
Human-Object Interaction
image consistency
high-fidelity synthesis
arbitrary input resolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

sequence modeling
multi-image composition
trajectory mapping
distribution matching
unified multimodal framework
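The 12.5× speedup claim follows directly from step count: 8 distilled steps against an implied ~100-step baseline sampler (8 × 12.5 = 100). A minimal sketch of that arithmetic plus a generic few-step Euler-style denoising loop is below; the denoiser is a stand-in, and the paper's trajectory-mapping and distribution-matching distillation objectives are not modeled here.

```python
# Illustrative sketch: few-step sampling and the resulting speedup.
# The denoise function is a placeholder for a distilled model; this is
# not the paper's sampler, just a generic uniform-timestep Euler loop.

def few_step_sample(denoise, x_noisy, steps=8):
    """Run a few-step Euler-style sampler from noise (t=1) toward data (t=0)."""
    x = x_noisy
    dt = 1.0 / steps
    for k in range(steps):
        t = 1.0 - k / steps          # current time, decreasing toward 0
        x = x - dt * denoise(x, t)   # one Euler update along the flow
    return x

baseline_steps = 100                 # typical full sampling schedule (assumption)
distilled_steps = 8                  # as reported in the abstract
speedup = baseline_steps / distilled_steps  # 12.5x, matching the paper's claim
```

The point of the post-training distillation is that the 8-step trajectory is trained to land near the same distribution the long baseline schedule reaches, so the step-count ratio translates almost directly into wall-clock speedup.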