Plug-and-Play Interpretable Responsible Text-to-Image Generation via Dual-Space Multi-facet Concept Control

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image (T2I) models suffer from critical shortcomings in fairness, safety, and interpretability: responsibility control is often fragmented, model-dependent, and lacking in transparent intervention mechanisms. To address this, we propose the first dual-space, multi-dimensional joint responsibility control framework that operates without fine-tuning the base model and enables plug-and-play regulation in both the text embedding and diffusion latent spaces. By integrating knowledge distillation with concept whitening, our method constructs an interpretable composite responsibility space, overcoming the limitations of unidimensional control and opaque interventions. Evaluated across multiple benchmarks, our approach achieves over 92% suppression rates for violent and biased content, improves responsibility concept control accuracy by 37%, preserves 99.6% of the original model’s generation quality, and supports visual attribution for fine-grained responsibility concepts.

📝 Abstract
Ethical issues around text-to-image (T2I) models demand comprehensive control over the generated content. Existing techniques addressing these issues for responsible T2I models aim for the generated content to be fair and safe (non-violent/explicit). However, these methods remain limited to handling the facets of responsibility concepts individually, and they also lack interpretability. Moreover, they often require alteration of the original model, which compromises model performance. In this work, we propose a unique technique to enable responsible T2I generation by simultaneously accounting for an extensive range of concepts for fair and safe content generation in a scalable manner. The key idea is to distill the target T2I pipeline with an external plug-and-play mechanism that learns an interpretable composite responsible space for the desired concepts, conditioned on the target T2I pipeline. We use knowledge distillation and concept whitening to enable this. At inference, the learned space is utilized to modulate the generative content. A typical T2I pipeline presents two plug-in points for our approach, namely the text embedding space and the diffusion model latent space. We develop modules for both points and show the effectiveness of our approach with a range of strong results.
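The core mechanics the abstract describes can be illustrated with a minimal sketch: whiten an embedding space so that its dimensions are decorrelated (the concept-whitening step), then suppress a responsibility-concept direction in a text embedding at inference without touching the base model. This is a hedged toy illustration in NumPy, not the paper's implementation; the function names (`whiten`, `suppress_concept`), the ZCA-style whitening choice, and the single-direction concept axis are all assumptions made for exposition.

```python
import numpy as np

def whiten(X, eps=1e-5):
    """ZCA-style whitening: decorrelate embedding dimensions so that
    individual axes can be read as disentangled concept coordinates
    (a simplified stand-in for concept whitening)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # symmetric whitening matrix
    return Xc @ W, W

def suppress_concept(embedding, concept_dir, strength=1.0):
    """Plug-and-play modulation: remove the component of a text embedding
    along a (hypothetical) learned responsibility-concept axis, leaving the
    rest of the embedding, and hence the base model, untouched."""
    c = concept_dir / np.linalg.norm(concept_dir)
    return embedding - strength * (embedding @ c) * c
```

In the paper's setting, the analogous modulation would be applied at the two plug-in points it names (the text embedding space and the diffusion latent space), with the concept axes learned via distillation from the target pipeline rather than fixed by hand as here.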
Problem

Research questions and friction points this paper is trying to address.

Ensuring fair and safe text-to-image generation
Providing interpretable multi-concept control without model alteration
Developing plug-and-play mechanism for responsible content modulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play mechanism for responsible T2I generation
Interpretable composite responsible space learning
Dual-space modulation via text and latent spaces
Basim Azam
Postdoctoral Research Fellow at The University of Melbourne
Deep Learning · Computer Vision · Pattern Recognition
Naveed Akhtar
School of Computing and Information Systems, The University of Melbourne, Australia