Test-Time Alignment of Text-to-Image Diffusion Models via Null-Text Embedding Optimisation

πŸ“… 2025-11-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing test-time adaptation (TTA) methods for diffusion models either suffer from reward hacking by over-optimizing the target reward, or produce semantically distorted outputs by neglecting the underlying semantic structure. To address these issues, we propose Null-TTA, the first TTA framework that performs gradient-based updates during inference solely on the unconditional (null-text) embedding, without modifying model parameters. By leveraging the intrinsic semantic manifold of the text embedding space, Null-TTA steers the generative distribution toward alignment with the target reward. Because it operates within classifier-free guidance, it avoids exploiting non-semantic noise, preserving semantic consistency and generation fidelity. Experiments demonstrate that Null-TTA achieves state-of-the-art TTA performance across diverse reward signals and exhibits strong cross-reward generalization, significantly outperforming existing TTA approaches.

πŸ“ Abstract
Test-time alignment (TTA) aims to adapt models to specific rewards during inference. However, existing methods tend to either under-optimise or over-optimise (reward hack) the target reward function. We propose Null-Text Test-Time Alignment (Null-TTA), which aligns diffusion models by optimising the unconditional embedding in classifier-free guidance, rather than manipulating latent or noise variables. Due to the structured semantic nature of the text embedding space, this ensures alignment occurs on a semantically coherent manifold and prevents reward hacking (exploiting non-semantic noise patterns to improve the reward). Since the unconditional embedding in classifier-free guidance serves as the anchor for the model's generative distribution, Null-TTA directly steers the model's generative distribution towards the target reward rather than merely adjusting individual samples, even without updating model parameters. Thanks to these desirable properties, we show that Null-TTA achieves state-of-the-art test-time alignment on the target reward while maintaining strong cross-reward generalisation. This establishes semantic-space optimisation as a novel, effective, and principled paradigm for TTA.
Problem

Research questions and friction points this paper is trying to address.

Aligns text-to-image models during inference without reward hacking
Optimizes unconditional embeddings in classifier-free guidance framework
Maintains semantic coherence while achieving target reward alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes unconditional embedding in classifier-free guidance
Aligns diffusion models on semantically coherent manifold
Directly steers generative distribution without parameter updates
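The mechanism in the bullets above can be illustrated with a minimal toy sketch: a linear "denoiser" stands in for the diffusion model, classifier-free guidance (CFG) combines its conditional and unconditional predictions, and only the null-text embedding is updated by gradient ascent on a reward. The linear model, the quadratic reward, and the analytic gradient are all illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_lat, d_emb = 4, 6
W = rng.normal(size=(d_lat, d_emb))   # toy linear "denoiser": eps(e) = W @ e
e_cond = rng.normal(size=d_emb)       # fixed prompt (conditional) embedding
e_null = np.zeros(d_emb)              # null-text embedding -- the only thing optimised
x_noisy = rng.normal(size=d_lat)      # fixed initial noisy latent
target = rng.normal(size=d_lat)       # stand-in for what a reward model prefers
scale = 7.5                           # CFG guidance scale

def sample(e_null):
    """One-step CFG 'denoising' in the toy model."""
    eps_u, eps_c = W @ e_null, W @ e_cond
    eps_cfg = eps_u + scale * (eps_c - eps_u)  # classifier-free guidance combine
    return x_noisy - eps_cfg

def reward(x):
    """Toy reward: negative squared distance to the target sample."""
    return -np.sum((x - target) ** 2)

lr = 1e-4
r_before = reward(sample(e_null))
for _ in range(500):
    x = sample(e_null)
    # Analytic reward gradient w.r.t. e_null for this linear toy:
    #   x = x_noisy - (1 - scale) * W @ e_null - scale * W @ e_cond
    #   dx/de_null = -(1 - scale) * W,  dr/dx = -2 * (x - target)
    grad = 2.0 * (1.0 - scale) * (W.T @ (x - target))
    e_null += lr * grad               # gradient ascent on the reward
r_after = reward(sample(e_null))
print(f"reward before: {r_before:.3f}, after: {r_after:.3f}")
```

Note that the sample `x` changes only through the unconditional branch of CFG: the model weights `W` and the prompt embedding `e_cond` stay fixed, mirroring how Null-TTA shifts the generative distribution without touching model parameters.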
πŸ”Ž Similar Papers
No similar papers found.
Taehoon Kim
School of Informatics, University of Edinburgh
Henry Gouk
Assistant Professor, University of Edinburgh
Artificial Intelligence · Machine Learning · AI Engineering · Trustworthy AI
Timothy Hospedales
School of Informatics, University of Edinburgh, Samsung AI Center, Cambridge