IP-Composer: Semantic Composition of Visual Concepts

📅 2025-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses fine-grained visual concept composition guided by multiple source images. To overcome limitations of single-image constraints and high fine-tuning costs, we propose a training-free, text-guided multi-image semantic stitching method. Our approach constructs concept-specific subspace projections based on CLIP, mapping linguistically specified semantic regions from multiple reference images into a unified embedding space; these embeddings are then seamlessly stitched and fused via the IP-Adapter framework. This is the first method enabling training-free, cross-domain controllable disentanglement and recomposition of multi-image concepts. Experiments demonstrate substantial improvements over existing image-conditioned generation methods in both conceptual coverage breadth and spatial localization accuracy. Comprehensive qualitative and quantitative evaluations validate the effectiveness and robustness of our approach.

Technology Category

Application Category

📝 Abstract
Content creators often draw inspiration from multiple visual sources, combining distinct elements to craft new compositions. Modern computational approaches now aim to emulate this fundamental creative process. Although recent diffusion models excel at text-guided compositional synthesis, text as a medium often lacks precise control over visual details. Image-based composition approaches can capture more nuanced features, but existing methods are typically limited in the range of concepts they can capture, and require expensive training procedures or specialized data. We present IP-Composer, a novel training-free approach for compositional image generation that leverages multiple image references simultaneously, while using natural language to describe the concept to be extracted from each image. Our method builds on IP-Adapter, which synthesizes novel images conditioned on an input image's CLIP embedding. We extend this approach to multiple visual inputs by crafting composite embeddings, stitched from the projections of multiple input images onto concept-specific CLIP-subspaces identified through text. Through comprehensive evaluation, we show that our approach enables more precise control over a larger range of visual concept compositions.
Problem

Research questions and friction points this paper is trying to address.

Text lacks precise control over visual details in composition.
Existing image methods limit concept range and require costly training.
Enabling multi-image and text-guided composition without specialized data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free multi-image composition via CLIP embeddings
Composite embeddings from concept-specific CLIP subspaces
Text-guided visual concept projection from multiple references
🔎 Similar Papers
No similar papers found.