🤖 AI Summary
This work addresses the challenge of jointly optimizing fairness, safety, and semantic fidelity in text-to-image generation. We propose a dual-module co-optimization framework that operates on the bottleneck representations of diffusion models. Specifically, we introduce a learnable dual-path transformation in the intermediate feature space, with one path dedicated to responsibility (encompassing fairness and safety) and the other to semantic fidelity, and design a novel score-matching objective that enables end-to-end joint optimization of both objectives. Our method requires no fine-tuning of the backbone diffusion model and integrates seamlessly with large-scale models such as SDXL. Experiments across multiple benchmarks demonstrate an improvement of over 20% on the joint responsibility–coherence metric, with no degradation in image quality. To our knowledge, this is the first approach to achieve synergistic optimization of fairness, safety, and semantic fidelity without compromising generation quality, demonstrating its effectiveness, generality, and scalability.
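At a high level, the dual-path idea can be pictured as two small learnable modules applied as residual corrections to the frozen backbone's bottleneck features. The following is a minimal NumPy sketch under illustrative assumptions only: the module names, linear-map parameterization, and feature shapes are placeholders, not the paper's actual architecture or training objective.

```python
import numpy as np

# Hypothetical sketch of a dual-path transform on a bottleneck feature h.
# All names and shapes here are illustrative, not from the released method.

rng = np.random.default_rng(0)
d = 8  # bottleneck channel dimension (illustrative)

# Two small learnable maps standing in for the two modules:
W_resp = rng.normal(scale=0.01, size=(d, d))  # responsibility path (fairness/safety)
W_sem = rng.normal(scale=0.01, size=(d, d))   # semantic-fidelity path

def dual_path_transform(h):
    """Apply both paths as residual corrections to the frozen backbone's
    bottleneck feature h, leaving the backbone weights untouched."""
    return h + h @ W_resp + h @ W_sem

h = rng.normal(size=(4, d))        # a batch of bottleneck features
h_new = dual_path_transform(h)
assert h_new.shape == h.shape      # the transform preserves feature shape
```

Because the correction is residual and confined to the intermediate feature space, the backbone stays frozen, which is what allows the approach to plug into large pretrained models such as SDXL without retraining them.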
📝 Abstract
The rapid advancement of diffusion models has enabled high-fidelity and semantically rich text-to-image generation; however, ensuring fairness and safety remains an open challenge. Existing methods typically improve fairness and safety at the expense of semantic fidelity and image quality. In this work, we propose RespoDiff, a novel framework for responsible text-to-image generation that incorporates a dual-module transformation on the intermediate bottleneck representations of diffusion models. Our approach introduces two distinct learnable modules: one focused on capturing and enforcing responsible concepts, such as fairness and safety, and the other dedicated to maintaining semantic alignment with neutral prompts. To facilitate the dual learning process, we introduce a novel score-matching objective that enables effective coordination between the modules. Our method outperforms state-of-the-art approaches to responsible generation, ensuring semantic alignment while optimizing both objectives without compromising image fidelity. It improves responsible and semantically coherent generation by 20% across diverse, unseen prompts, and integrates seamlessly into large-scale models like SDXL, enhancing fairness and safety. Code will be released upon acceptance.