🤖 AI Summary
Existing methods struggle to jointly ensure consistency in character generation and editing within a unified framework: generative models often lose fine-grained identity features, while editing models frequently compromise spatial controllability and instruction alignment. This paper proposes ReMix, a unified framework that integrates generation and editing through semantic feature rewriting and layout-decoupled control. Its ReMix Module leverages the multimodal reasoning of MLLMs to rewrite the semantic features of input images and adapt instruction embeddings to the native DiT backbone without fine-tuning, while its IP-ControlNet decouples semantic and layout cues from reference images and introduces an ε-equivariant latent space that jointly denoises the reference and target images within a shared noise space. Inspired by convergent evolution and quantum decoherence, this design promotes feature alignment in the hidden space and enhances identity consistency. Experiments demonstrate significant improvements in identity fidelity, pose controllability, and instruction adherence across personalized generation, image editing, and style transfer, validating the framework's effectiveness and generalizability.
📝 Abstract
Recent advances in large-scale text-to-image diffusion models (e.g., FLUX.1) have greatly improved visual fidelity in consistent character generation and editing. However, existing methods rarely unify these tasks within a single framework. Generation-based approaches struggle with fine-grained identity consistency across instances, while editing-based methods often lose spatial controllability and instruction alignment. To bridge this gap, we propose ReMix, a unified framework for character-consistent generation and editing. It consists of two core components: the ReMix Module and IP-ControlNet. The ReMix Module leverages the multimodal reasoning ability of MLLMs to edit the semantic features of input images and adapt instruction embeddings to the native DiT backbone without fine-tuning. While this ensures coherent semantic layouts, pixel-level consistency and pose controllability remain challenging. To address this, IP-ControlNet extends ControlNet to decouple semantic and layout cues from reference images and introduces an ε-equivariant latent space that jointly denoises the reference and target images within a shared noise space. Inspired by convergent evolution and quantum decoherence, where environmental noise drives state convergence, this design promotes feature alignment in the hidden space, enabling consistent object generation while preserving identity. ReMix supports a wide range of tasks, including personalized generation, image editing, style transfer, and multi-condition synthesis. Extensive experiments validate its effectiveness and efficiency as a unified framework for character-consistent image generation and editing.
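The abstract does not spell out the shared-noise mechanism, so below is a minimal PyTorch-style sketch of how ε-equivariant joint denoising could look, assuming a DDPM-style forward process. All names here (`add_noise`, `joint_denoise_step`, `denoiser`, `cond`, `alpha_bar_t`) are illustrative assumptions rather than the authors' actual API; the key point is that a single ε sample perturbs both the reference and target latents before a joint denoising pass.

```python
import torch

def add_noise(x0, eps, alpha_bar_t):
    """DDPM-style forward process: x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps.

    alpha_bar_t is a scalar tensor holding the cumulative noise schedule
    value at timestep t (an assumption; the paper may use a different schedule).
    """
    return alpha_bar_t.sqrt() * x0 + (1.0 - alpha_bar_t).sqrt() * eps

def joint_denoise_step(denoiser, ref_latent, tgt_latent, alpha_bar_t, t, cond):
    # One shared epsilon perturbs BOTH latents, so reference and target follow
    # equivariant noising trajectories; a joint forward pass through the
    # denoiser then lets its attention layers align their hidden features.
    eps = torch.randn_like(tgt_latent)
    noisy = torch.cat(
        [add_noise(ref_latent, eps, alpha_bar_t),
         add_noise(tgt_latent, eps, alpha_bar_t)],
        dim=0,
    )
    eps_pred = denoiser(noisy, t, cond)         # noise prediction for both images
    eps_ref, eps_tgt = eps_pred.chunk(2, dim=0)
    return eps_ref, eps_tgt
```

Sharing ε, rather than sampling it independently per image, is what keeps the two trajectories comparable at every timestep; this is the property the abstract attributes to the ε-equivariant latent space.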