🤖 AI Summary
This work addresses the limitations of existing linear-attention-based controllable diffusion models, which struggle to efficiently integrate heterogeneous multimodal conditions and converge slowly, hindering on-device, privacy-preserving image generation. To overcome these challenges, the authors propose a dual-path controllable diffusion framework that replaces conventional multimodal attention mechanisms with a lightweight, unified gated condition injection module. This design enables flexible fusion of both spatially aligned and non-aligned multimodal conditions without incurring additional computational overhead. Extensive experiments demonstrate that the method achieves state-of-the-art controllable generation performance across multiple benchmarks, improving both output fidelity and control accuracy while remaining amenable to efficient on-device deployment.
📝 Abstract
Recent advances in diffusion-based controllable visual generation have led to remarkable improvements in image quality. However, these powerful models are typically deployed on cloud servers due to their large computational demands, raising serious concerns about user data privacy. To enable secure and efficient on-device generation, in this paper we explore controllable diffusion models built upon linear attention architectures, which offer superior scalability and efficiency, even on edge devices. Yet our experiments reveal that existing controllable generation frameworks, such as ControlNet and OminiControl, either lack the flexibility to support multiple heterogeneous condition types or converge slowly on such linear-attention models. To address these limitations, we propose a novel controllable diffusion framework tailored to linear attention backbones such as SANA. The core of our method is a unified gated conditioning module operating in a dual-path pipeline, which effectively integrates multiple types of conditional input, including spatially aligned and non-aligned cues. Extensive experiments on multiple tasks and benchmarks demonstrate that our approach achieves state-of-the-art controllable generation performance among linear-attention models, surpassing existing methods in both fidelity and controllability.
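The abstract does not give implementation details of the gated conditioning module, but the general idea of gated condition injection can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration only (the function names, the zero-initialized scalar gate, and the mean-pooling of the non-aligned condition are not taken from the paper): one path adds a spatially aligned condition token-by-token, and a second path injects a non-aligned reference condition through the same gated addition.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_inject(hidden, cond, W, alpha):
    """Hypothetical gated condition injection: project the condition
    tokens and add them to the hidden states, scaled by a gate.
    A zero-initialized gate makes the injection an identity map at the
    start of training (a common trick, assumed here, not confirmed)."""
    gate = np.tanh(alpha)            # scalar gate in (-1, 1)
    return hidden + gate * (cond @ W)

# Toy shapes: 16 image tokens with feature dimension 8.
d = 8
hidden = rng.normal(size=(16, d))

# Path 1: spatially aligned condition (e.g. a depth map), one condition
# token per image token, injected position-wise.
aligned_cond = rng.normal(size=(16, d))
W_proj = rng.normal(size=(d, d)) * 0.02
out = gated_inject(hidden, aligned_cond, W_proj, alpha=0.0)
# With a zero gate, the conditioned output equals the original hidden states.
assert np.allclose(out, hidden)

# Path 2: non-aligned condition (e.g. a subject reference) with its own
# token count; here it is mean-pooled to one token and broadcast over the
# sequence before the same gated injection, now with a non-zero gate.
ref_cond = rng.normal(size=(4, d)).mean(axis=0, keepdims=True)  # (1, d)
out = gated_inject(out, ref_cond, W_proj, alpha=0.5)
assert out.shape == hidden.shape
```

Because both paths reuse the same additive form, neither introduces extra attention computation over the image tokens, which is consistent with the efficiency claim in the abstract; the specific pooling and gating choices above are placeholders.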