🤖 AI Summary
Conditional diffusion models frequently suffer from semantic hallucinations due to text–image misalignment; existing post-hoc alignment evaluation strategies (e.g., Best-of-N) incur substantial computational overhead. To address this, we propose NoisyCLIP—the first method to perform prompt-latent semantic alignment detection in the early noisy latent space of the reverse diffusion process, enabling real-time misalignment identification during generation. Our approach leverages a dual-encoder architecture that transfers CLIP’s semantic representational capability to noisy latent representations, thereby circumventing the need to wait for full denoising. Experiments demonstrate that, under Best-of-N settings, NoisyCLIP achieves 98% of CLIP’s alignment accuracy while reducing inference cost by 50%. This yields significant improvements in both generation efficiency and semantic fidelity.
📝 Abstract
Conditional diffusion models rely on language-to-image alignment methods to steer generation toward semantically accurate outputs. Despite the success of this architecture, misalignment and hallucinations remain common, motivating automatic misalignment detection tools to improve quality, for example in a Best-of-N (BoN) post-generation setting. Unfortunately, measuring alignment after generation is expensive: the full denoising process must complete before prompt adherence can be determined. In contrast, this work hypothesizes that text–image misalignments can be detected early in the denoising process, enabling real-time alignment assessment without waiting for the complete generation. In particular, we propose NoisyCLIP, a method that measures semantic alignment in the noisy latent space. This work is the first to explore and benchmark prompt-to-latent misalignment detection during image generation using dual encoders in the reverse diffusion process. We evaluate NoisyCLIP qualitatively and quantitatively and find that it reduces computational cost by 50% while achieving 98% of CLIP's alignment performance in BoN settings. This approach enables real-time alignment assessment during generation, reducing cost without sacrificing semantic fidelity.
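The cost argument behind the early-exit BoN scheme can be sketched numerically. The snippet below is a toy illustration, not the paper's implementation: `early_alignment_score` is a hypothetical stand-in for NoisyCLIP's noisy-latent encoder (here just a dot product between toy vectors), and the step counts are assumed values. It shows how scoring all N candidates after a few denoising steps, then fully denoising only the winner, reduces the total number of denoising steps compared with naive BoN.

```python
def early_alignment_score(latent, prompt_embedding):
    # Stand-in for a noisy-latent alignment scorer such as NoisyCLIP:
    # a dot product between a toy latent vector and a toy prompt embedding.
    return sum(l * p for l, p in zip(latent, prompt_embedding))

def best_of_n_early(prompt_embedding, candidates, early_steps=10, total_steps=50):
    """Score every candidate after `early_steps` denoising steps, then spend
    the remaining (total_steps - early_steps) steps only on the best one.
    Returns (index of chosen candidate, fraction of denoising steps saved)."""
    n = len(candidates)
    scores = [early_alignment_score(c, prompt_embedding) for c in candidates]
    best = max(range(n), key=lambda i: scores[i])
    # Naive BoN denoises all N candidates to completion.
    full_cost = n * total_steps
    # Early-exit BoN pays early_steps per candidate, plus the tail for one.
    early_cost = n * early_steps + (total_steps - early_steps)
    saved = 1 - early_cost / full_cost
    return best, saved

# Example with 4 toy 3-dim "latents": candidate 2 aligns best with the prompt.
prompt = [1.0, 0.0, 1.0]
latents = [[0.1, 0.9, 0.0], [0.3, 0.2, 0.1], [0.8, 0.1, 0.7], [0.2, 0.5, 0.3]]
idx, saved = best_of_n_early(prompt, latents)
print(idx, saved)  # candidate 2 wins; 60% of the denoising steps are saved
```

With N = 4, 10 early steps, and 50 total steps, naive BoN costs 200 steps while the early-exit variant costs 4·10 + 40 = 80, a 60% saving; the 50% figure reported above corresponds to the paper's actual generation setting.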