ArtCrafter: Text-Image Aligning Style Transfer via Embedding Reframing

📅 2025-01-03
🤖 AI Summary
Existing text-guided image style transfer methods struggle to simultaneously ensure semantic fidelity, stylistic consistency, and generation diversity. To address this, we propose an embedding-reframing-based cross-modal alignment framework. Our method integrates the Perceiver architecture, cross-modal attention, and diffusion-based conditional sampling. Key contributions include: (1) an attention-driven multi-level style extraction module for fine-grained style modeling; (2) a text-image co-alignment enhancement component to improve cross-modal semantic consistency; and (3) an embedding-space reframing mechanism enabling controllable style fusion. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods in stylistic intensity, controllability, and diversity. Moreover, it achieves superior generalization across multiple styles and enhances overall generation quality.
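
Contribution (3), the embedding-space reframing mechanism, amounts to blending the attention-enhanced multimodal embedding back with the original conditioning embedding. A minimal sketch of that idea follows; the function name `reframe` and the blending weight `alpha` are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def reframe(original, enhanced, alpha=0.6):
    """Blend the attention-enhanced multimodal embedding with the
    original conditioning embedding. A larger alpha leans on the
    enhanced (style-rich) embedding; a smaller alpha preserves more
    of the original, encouraging output diversity. `alpha` is a
    hypothetical knob, not taken from the paper."""
    return alpha * enhanced + (1.0 - alpha) * original

text_emb = np.zeros(8)       # stand-in for an original embedding
enhanced_emb = np.ones(8)    # stand-in for an enhanced embedding
blended = reframe(text_emb, enhanced_emb, alpha=0.25)
print(blended[:3])  # → [0.25 0.25 0.25]
```

Exposing the blend as an explicit scalar is what makes the style fusion controllable: sweeping `alpha` trades style intensity against diversity without retraining.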

Technology Category

Application Category

📝 Abstract
Recent years have witnessed significant advancements in text-guided style transfer, primarily attributed to innovations in diffusion models. These models excel in conditional guidance, utilizing text or images to direct the sampling process. However, despite their capabilities, direct conditional guidance approaches often face challenges in balancing the expressiveness of textual semantics with the diversity of output results while capturing stylistic features. To address these challenges, we introduce ArtCrafter, a novel framework for text-to-image style transfer. Specifically, we introduce an attention-based style extraction module, meticulously engineered to capture the subtle stylistic elements within an image. This module features a multi-layer architecture that leverages the capabilities of perceiver attention mechanisms to integrate fine-grained information. Additionally, we present a novel text-image aligning augmentation component that adeptly balances control over both modalities, enabling the model to efficiently map image and text embeddings into a shared feature space. We achieve this through attention operations that enable smooth information flow between modalities. Lastly, we incorporate an explicit modulation that seamlessly blends multimodal enhanced embeddings with original embeddings through an embedding reframing design, empowering the model to generate diverse outputs. Extensive experiments demonstrate that ArtCrafter yields impressive results in visual stylization, exhibiting exceptional levels of stylistic intensity, controllability, and diversity.
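
The style extraction module described above distills fine-grained style from many image patch features into a compact embedding via perceiver attention: a small set of learned latent queries cross-attends to the patch features. The following is a minimal single-layer sketch in NumPy; the shapes, the identity projections, and all variable names are illustrative assumptions (real layers use learned Q/K/V weight matrices and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def perceiver_style_layer(latents, features, d):
    """One perceiver-attention step: latent style queries cross-attend
    to image patch features, compressing them into a fixed-size style
    embedding. Projections are identity here for brevity."""
    q, k, v = latents, features, features
    attn = softmax(q @ k.T / np.sqrt(d))  # (n_latents, n_patches)
    return latents + attn @ v             # residual update of latents

rng = np.random.default_rng(0)
d = 16
latents = rng.normal(size=(4, d))    # 4 learned latent style queries
patches = rng.normal(size=(64, d))   # 64 patch features from an image
style_emb = perceiver_style_layer(latents, patches, d)
print(style_emb.shape)  # → (4, 16)
```

Because the number of latents is fixed, the output size is independent of image resolution, which is what lets a multi-layer stack of such blocks integrate fine-grained information at a constant cost downstream.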
Problem

Research questions and friction points this paper is trying to address.

Text-to-Image Style Transfer
Semantic Integrity
Stylistic Consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level Perceiver Attention
Information Fusion
Controllable and Diverse Image Style Transfer