🤖 AI Summary
This work addresses the challenge of high-fidelity, text-image-driven poster generation. We propose the first unified framework supporting arbitrary resolution, customizable layouts, and controllable content fidelity. Methodologically, we construct a hierarchical text-image–layout alignment dataset and design a systematic annotation pipeline to precisely model textual semantics and spatial hierarchies. Building upon Seedream3.0, our approach integrates progressive training, multimodal conditional control, and joint text-image encoding to enable layout-aware end-to-end generation. Extensive experiments demonstrate that our method significantly outperforms GPT-4o and SeedEdit3.0 across multiple poster benchmarks, achieving an 88.55% usability rate. The framework has been deployed in industrial applications, including ByteDance’s Jianying (CapCut).
📝 Abstract
We present DreamPoster, a Text-to-Image generation framework that intelligently synthesizes high-quality posters from user-provided images and text prompts while maintaining content fidelity and supporting flexible resolution and layout outputs. Specifically, DreamPoster is built upon our T2I model, Seedream3.0 to uniformly process different poster generating types. For dataset construction, we propose a systematic data annotation pipeline that precisely annotates textual content and typographic hierarchy information within poster images, while employing comprehensive methodologies to construct paired datasets comprising source materials (e.g., raw graphics/text) and their corresponding final poster outputs. Additionally, we implement a progressive training strategy that enables the model to hierarchically acquire multi-task generation capabilities while maintaining high-quality generation. Evaluations on our testing benchmarks demonstrate DreamPoster's superiority over existing methods, achieving a high usability rate of 88.55%, compared to GPT-4o (47.56%) and SeedEdit3.0 (25.96%). DreamPoster will be online in Jimeng and other Bytedance Apps.