🤖 AI Summary
This work addresses the challenge of multi-concept image editing with LoRA models while preserving global scene coherence—such as lighting, structure, and fine details—without retraining. The proposed method leverages the spatial coherence of concept-specific features in early layers of a Rectified Flow Transformer. Through forward activation analysis, it generates semantically aligned, concept-level spatial masks, which are then used to weight and fuse corresponding LoRA adapter parameters. This constitutes the first training-free framework for multi-concept editing, enabling localized semantic control while maintaining global contextual stability. Experiments demonstrate substantial improvements in identity and detail fidelity, seamless editing of multiple subjects and styles, and superior trade-offs between interactivity and output quality. The approach effectively realizes high-fidelity, composable, and controllable image synthesis—functioning as a “Photoshop for LoRA.”
📝 Abstract
We introduce LoRAShop, the first framework for multi-concept image editing with LoRA models. LoRAShop builds on a key observation about the feature interaction patterns inside Flux-style diffusion transformers: concept-specific transformer features activate spatially coherent regions early in the denoising process. We harness this observation to derive a disentangled latent mask for each concept in a prior forward pass and blend the corresponding LoRA weights only within the regions bounding the concepts to be personalized. The resulting edits seamlessly integrate multiple subjects or styles into the original scene while preserving global context, lighting, and fine details. Our experiments demonstrate that LoRAShop delivers stronger identity preservation than baselines. By eliminating retraining and external constraints, LoRAShop turns personalized diffusion models into a practical `photoshop-with-LoRAs' tool and opens new avenues for compositional visual storytelling and rapid creative iteration.
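The core mechanism both sections describe, per-concept spatial masks gating where each LoRA's low-rank update applies, can be sketched as a small NumPy toy. This is an illustrative assumption about the fusion, not the paper's implementation: the function name `masked_lora_fusion`, the shapes, and the hard binary masks are all hypothetical, whereas the actual masks come from forward activation analysis in the transformer's early layers.

```python
import numpy as np

def lora_delta(x, A, B, alpha=1.0):
    # Low-rank LoRA update: x @ (A @ B), scaled by alpha.
    return alpha * (x @ A @ B)

def masked_lora_fusion(x, W, loras, masks):
    """Apply each concept's LoRA update only where its spatial mask is active.

    x:     (tokens, d_in)   token features (one row per spatial token)
    W:     (d_in, d_out)    frozen base weight
    loras: list of (A, B)   low-rank factors, A: (d_in, r), B: (r, d_out)
    masks: (n_concepts, tokens) per-token concept masks in [0, 1]
    """
    out = x @ W  # base model everywhere; preserves global context
    for (A, B), mask in zip(loras, masks):
        # Blend the concept's update only inside its masked region.
        out += mask[:, None] * lora_delta(x, A, B)
    return out

rng = np.random.default_rng(0)
tokens, d_in, d_out, rank = 8, 16, 16, 4
x = rng.normal(size=(tokens, d_in))
W = rng.normal(size=(d_in, d_out))
loras = [(rng.normal(size=(d_in, rank)), rng.normal(size=(rank, d_out)))
         for _ in range(2)]
masks = np.zeros((2, tokens))
masks[0, :4] = 1.0   # concept 0 (e.g. a subject LoRA) owns the first tokens
masks[1, 4:] = 1.0   # concept 1 owns the remaining tokens
y = masked_lora_fusion(x, W, loras, masks)
```

Outside a concept's mask its LoRA contributes nothing, which is how the fusion keeps unedited regions identical to the base model's output.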