Highly Efficient Test-Time Scaling for T2I Diffusion Models with Text Embedding Perturbation

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing test-time scaling methods for text-to-image (T2I) diffusion models neglect systematic modeling of noise stochasticity—particularly the untapped potential of perturbing text embeddings. To address this, we propose **text-embedding perturbation** as a novel stochastic source, synergistically coordinated with spatial-domain noise. Through frequency-domain analysis, we demonstrate its complementary nature to conventional spatial noise; we further design a step-wise perturbation strategy and adaptively modulate perturbation strength based on model tolerance. Our method requires no additional sampling, architectural modification, or computational overhead—enabling plug-and-play integration into mainstream test-time scaling frameworks. Extensive experiments across multiple benchmarks show significant improvements in both generation quality and diversity. The implementation is publicly available.

📝 Abstract
Test-time scaling (TTS) aims to achieve better results by increasing random sampling and evaluating samples against rules and metrics. However, in text-to-image (T2I) diffusion models, most related work focuses on search strategies and reward models, while the impact of the stochastic characteristics of noise in T2I diffusion models on a method's performance remains unexplored. In this work, we analyze the effects of randomness in T2I diffusion models and explore a new form of randomness for TTS: text embedding perturbation, which couples with existing randomness such as SDE-injected noise to enhance generative diversity and quality. We begin with a frequency-domain analysis of these forms of randomness and their impact on generation, and find that the two sources of randomness exhibit complementary behavior in the frequency domain: spatial noise favors low-frequency components (early steps), while text embedding perturbation enhances high-frequency details (later steps), thereby compensating for the potential limitations of spatial noise in high-frequency manipulation. Moreover, the text embedding exhibits varying tolerance to perturbation across different dimensions of the generation process. Specifically, our method consists of two key designs: (1) introducing step-based text embedding perturbation that combines frequency-guided noise schedules with spatial noise perturbation; (2) adapting the perturbation intensity selectively based on each component's frequency-specific contribution to generation and its tolerance to perturbation. Our approach can be seamlessly integrated into existing TTS methods and demonstrates significant improvements on multiple benchmarks with almost no additional computation. Code is available at https://github.com/xuhang07/TEP-Diffusion.
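The step-based perturbation described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the linear ramp schedule, and the `base_sigma` value are all assumptions; the paper's actual frequency-guided schedule and tolerance-adaptive modulation are more involved.

```python
import numpy as np

def perturb_text_embedding(embed, step, total_steps, base_sigma=0.05, rng=None):
    """Hypothetical step-wise text-embedding perturbation (illustrative only).

    Scales Gaussian noise by a schedule that grows toward later denoising
    steps, where (per the paper's frequency-domain analysis) text-embedding
    randomness mainly shapes high-frequency detail. A real implementation
    would also modulate strength per dimension according to each dimension's
    tolerance to perturbation.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Linear ramp: little perturbation in early (low-frequency) steps,
    # more in later (high-frequency) steps.
    sigma = base_sigma * (step / max(total_steps - 1, 1))
    return embed + sigma * rng.standard_normal(embed.shape)

# Toy usage with a CLIP-like (77, 768) text embedding at step 40 of 50.
emb = np.zeros((77, 768))
out = perturb_text_embedding(emb, step=40, total_steps=50,
                             rng=np.random.default_rng(0))
```

Because the method only adds noise to an embedding the pipeline already computes, it introduces essentially no extra sampling or forward-pass cost, which is why it can be dropped into existing TTS frameworks.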
Problem

Research questions and friction points this paper is trying to address.

Enhancing generative diversity and quality in T2I diffusion models
Addressing unexplored impact of noise randomness on TTS performance
Introducing text embedding perturbation to complement spatial noise limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces text embedding perturbation for test-time scaling
Combines frequency-guided noise schedules with spatial noise
Adapts perturbation intensity based on frequency contributions
Hang Xu
MoE Key Lab of BIPC, USTC
Linjiang Huang
BUAA, CUHK, CASIA
Computer Vision, Pattern Recognition, Machine Learning
Feng Zhao
MoE Key Lab of BIPC, USTC