Content-style disentangled representation for controllable artistic image stylization and generation

📅 2024-12-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing artistic style transfer methods rely solely on image-based supervision, leading to modality limitation (supporting only image inputs for content or style) and incomplete content-style disentanglement (causing semantic leakage). This paper proposes the first fully disentangled multimodal controllable art generation framework. We extend WikiStyle to construct a novel multimodal dataset, design a Q-Former-guided disentangled representation learning mechanism, introduce learnable multi-step cross-attention, and integrate fine-tuning of pre-trained diffusion models with multimodal supervision. Our approach achieves the first text-and-image hybrid-driven style transfer without semantic leakage. It preserves content semantics fidelity and style consistency while significantly outperforming unimodal baselines. Quantitative and qualitative evaluations demonstrate superior expressiveness, robustness, and controllability across diverse input modalities.

📝 Abstract
Controllable artistic image stylization and generation aims to render content provided by text or image in a learned artistic style, where decoupling content and style is the key to achieving satisfactory results. However, current methods for content and style disentanglement rely primarily on image information for supervision, which leads to two problems: 1) models can only support one modality for style or content input; 2) incomplete disentanglement results in semantic interference from the reference image. To address these issues, this paper proposes a content-style representation disentangling method for controllable artistic image stylization and generation. We construct a WikiStyle+ dataset consisting of artworks with corresponding textual descriptions for style and content. Based on this multimodal dataset, we propose a diffusion model guided by disentangled content and style representations. The disentangled representations are first learned by Q-Formers and then injected into a pre-trained diffusion model using learnable multi-step cross-attention layers for more controllable stylization. This approach allows the model to accommodate inputs from different modalities. Experimental results show that our method achieves a thorough disentanglement of content and style in reference images under multimodal supervision, thereby enabling a harmonious integration of content and style in the generated outputs and successfully producing style-consistent and expressive stylized images.
Problem

Research questions and friction points this paper is trying to address.

Achieve thorough content-style disentanglement for artistic stylization
Support multimodal inputs for both style and content
Prevent content leakage from reference images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset for style-content disentanglement
Q-Formers learn disentangled representations
Diffusion model with multi-step cross-attention
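The pipeline described above can be sketched in code. The following is a minimal, illustrative PyTorch sketch, not the paper's implementation: two Q-Former-style modules (learnable queries cross-attending to reference-image features) extract content and style tokens, which are then injected into diffusion latent tokens through learnable per-step cross-attention layers. All dimensions, module names, and the 3-step schedule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class QFormerBlock(nn.Module):
    """Minimal Q-Former-style extractor (hypothetical simplification):
    learnable queries cross-attend to encoder features and yield a
    fixed-size set of representation tokens."""
    def __init__(self, num_queries=8, dim=64, num_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))

    def forward(self, feats):                  # feats: (B, N, dim)
        q = self.queries.expand(feats.size(0), -1, -1)
        out, _ = self.attn(q, feats, feats)    # queries attend to features
        return out + self.mlp(out)             # (B, num_queries, dim)

class CrossAttnInjection(nn.Module):
    """Inject condition tokens into diffusion latent tokens via
    cross-attention, with one learnable layer per denoising-step group
    (the 'multi-step' aspect, schedule length assumed here)."""
    def __init__(self, dim=64, num_heads=4, num_steps=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_steps))

    def forward(self, x, cond, step):          # x: (B, L, dim) latent tokens
        out, _ = self.layers[step](x, cond, cond)
        return x + out                         # residual injection

# Toy usage: separate Q-Formers for content and style, injected at
# step 0 of a hypothetical 3-step schedule.
content_qf, style_qf = QFormerBlock(), QFormerBlock()
inject = CrossAttnInjection()
img_feats = torch.randn(2, 49, 64)             # e.g. patch features of a reference image
latent = torch.randn(2, 16, 64)                # diffusion latent tokens
cond = torch.cat([content_qf(img_feats), style_qf(img_feats)], dim=1)
out = inject(latent, cond, step=0)
print(out.shape)                               # torch.Size([2, 16, 64])
```

In the paper's full setting, the content and style Q-Formers would be supervised by the corresponding WikiStyle+ text descriptions so that each token set captures only one factor; the sketch shows only the data flow of the injection.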