🤖 AI Summary
Existing test-time scaling (TTS) methods for text-to-image diffusion operate globally, ignoring spatial heterogeneity in image quality, which leads to computational redundancy and insufficient correction of local defects. To address this, we propose LoTTS, the first training-free framework for localized TTS. LoTTS uses quality-aware prompts to drive an attention-difference analysis, comparing cross- and self-attention maps to localize defective regions and form spatial masks, then perturbs and resamples only those regions while preserving global consistency. Evaluated on SD2.1, SDXL, and FLUX, LoTTS achieves state-of-the-art performance, improving both local quality and global fidelity while reducing GPU cost by 2–4× compared to Best-of-N sampling. Crucially, LoTTS is the first method to enable efficient, training-free, semantics-driven local test-time optimization, bridging a critical gap between computational efficiency and fine-grained image refinement.
📝 Abstract
Diffusion models have become the dominant paradigm in text-to-image generation, and test-time scaling (TTS) further improves quality by allocating more computation during inference. However, existing TTS methods operate at the full-image level, overlooking the fact that image quality is often spatially heterogeneous. This leads to unnecessary computation on already satisfactory regions and insufficient correction of localized defects. In this paper, we explore a new direction, localized TTS, which adaptively resamples defective regions while preserving high-quality ones, thereby substantially reducing the search space. This paradigm poses two central challenges: accurately localizing defects and maintaining global consistency. We propose LoTTS, the first fully training-free framework for localized TTS. For defect localization, LoTTS contrasts cross- and self-attention signals under quality-aware prompts (e.g., "high-quality" vs. "low-quality") to identify defective regions, then refines them into coherent masks. For consistency, LoTTS perturbs only the defective regions and denoises them locally, so that corrections remain confined while the rest of the image is left undisturbed. Extensive experiments on SD2.1, SDXL, and FLUX show that LoTTS achieves state-of-the-art performance: it consistently improves both local quality and global fidelity while reducing GPU cost by 2–4× compared to Best-of-N sampling. These findings establish localized TTS as a promising new direction for scaling diffusion models at inference time.
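The two-stage pipeline the abstract describes can be sketched in NumPy. This is a hypothetical simplification, not the paper's implementation: the function names (`defect_mask`, `local_perturb`), the min-max normalization, and the `threshold` and `noise_scale` values are illustrative assumptions, and the toy attention maps stand in for the real cross-attention responses to quality-aware prompt tokens.

```python
import numpy as np

def defect_mask(attn_hq, attn_lq, threshold=0.6):
    """Contrast attention maps under a "high-quality" vs. a "low-quality"
    prompt; regions responding more strongly to the low-quality prompt
    are flagged as defective. (Illustrative stand-in for LoTTS's
    attention-difference analysis.)

    attn_hq, attn_lq: (H, W) attention maps. Returns a binary (H, W) mask.
    """
    diff = attn_lq - attn_hq
    # min-max normalize so the threshold is scale-free
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return (diff > threshold).astype(np.float32)

def local_perturb(latent, mask, noise_scale=0.3, seed=0):
    """Re-noise only the masked (defective) regions; unmasked latents are
    left untouched, which is what preserves global consistency."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(latent.shape).astype(latent.dtype)
    return latent + noise_scale * mask * noise

# Toy demo: a defect confined to the top-left quadrant.
attn_hq = np.full((8, 8), 0.8); attn_hq[:4, :4] = 0.2
attn_lq = np.full((8, 8), 0.2); attn_lq[:4, :4] = 0.9
mask = defect_mask(attn_hq, attn_lq)

latent = np.zeros((8, 8), dtype=np.float32)
out = local_perturb(latent, mask)
# Only the masked quadrant is perturbed; the rest is bit-identical.
```

In the full method the masked latents would then be re-denoised with the diffusion model, whereas this sketch stops at the localized perturbation step.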