Scale Where It Matters: Training-Free Localized Scaling for Diffusion Models

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image diffusion models employ global test-time scaling (TTS), ignoring spatial heterogeneity in image quality—leading to computational redundancy and insufficient local defect correction. To address this, we propose LoTTS, the first training-free local TTS framework. LoTTS leverages quality-aware prompting to guide attention-difference analysis—comparing cross- and self-attention maps—to localize defective regions, generate spatial masks, and perform localized perturbation and resampling while preserving global consistency. Evaluated on SD2.1, SDXL, and FLUX, LoTTS achieves state-of-the-art performance, simultaneously enhancing both local quality and global fidelity. It reduces GPU computation overhead by 2–4× compared to Best-of-N sampling. Crucially, LoTTS is the first method to enable efficient, training-free, semantics-driven local test-time optimization—bridging a critical gap between computational efficiency and fine-grained image refinement.

📝 Abstract
Diffusion models have become the dominant paradigm in text-to-image generation, and test-time scaling (TTS) further improves quality by allocating more computation during inference. However, existing TTS methods operate at the full-image level, overlooking the fact that image quality is often spatially heterogeneous. This leads to unnecessary computation on already satisfactory regions and insufficient correction of localized defects. In this paper, we explore a new direction, Localized TTS, that adaptively resamples defective regions while preserving high-quality regions, thereby substantially reducing the search space. This paradigm poses two central challenges: accurately localizing defects and maintaining global consistency. We propose LoTTS, the first fully training-free framework for localized TTS. For defect localization, LoTTS contrasts cross- and self-attention signals under quality-aware prompts (e.g., high-quality vs. low-quality) to identify defective regions, and then refines them into coherent masks. For consistency, LoTTS perturbs only defective regions and denoises them locally, ensuring that corrections remain confined while the rest of the image remains undisturbed. Extensive experiments on SD2.1, SDXL, and FLUX demonstrate that LoTTS achieves state-of-the-art performance: it consistently improves both local quality and global fidelity, while reducing GPU cost by 2–4× compared to Best-of-N sampling. These findings establish localized TTS as a promising new direction for scaling diffusion models at inference time.
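The defect-localization step described above can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's implementation: `attn_hq` and `attn_lq` stand in for cross-attention maps obtained under the quality-aware prompt pair, and `self_attn` for a self-attention map used to smooth the raw difference into a coherent mask. All names and the thresholding scheme are assumptions for illustration.

```python
import numpy as np

def defect_mask(attn_hq, attn_lq, self_attn, threshold=0.5):
    """Hypothetical sketch of attention-difference defect localization.

    attn_hq / attn_lq: cross-attention maps of shape (H, W) for a
    quality-aware prompt pair (e.g. "high-quality" vs "low-quality").
    self_attn: self-attention map of shape (H*W, H*W), used here to
    propagate the raw difference into a spatially coherent region.
    """
    # Pixels that attend more strongly to the "low-quality" token than
    # to the "high-quality" token are treated as candidate defects.
    diff = np.clip(attn_lq - attn_hq, 0.0, None)
    if diff.max() > 0:
        diff = diff / diff.max()  # normalize to [0, 1]

    # Smooth via self-attention so the mask follows image structure
    # rather than isolated pixels.
    h, w = diff.shape
    smoothed = (self_attn @ diff.reshape(-1)).reshape(h, w)
    if smoothed.max() > 0:
        smoothed = smoothed / smoothed.max()

    # Binarize into a spatial mask of defective regions.
    return (smoothed >= threshold).astype(np.float32)
```

In the actual method the mask would be extracted from the denoising network's attention layers at inference time; the sketch only shows the contrast-then-smooth-then-threshold shape of the idea.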
Problem

Research questions and friction points this paper is trying to address.

Existing test-time scaling methods for diffusion models operate inefficiently at the full-image level
Current approaches overlook spatially heterogeneous image quality distribution
Full-image scaling wastes computation on satisfactory regions while under-correcting defects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Localized test-time scaling for diffusion models
Training-free defect localization using attention signals
Local region perturbation and denoising for consistency
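The third bullet, mask-confined perturbation and denoising, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's pipeline: `denoise_fn` is a stand-in for a diffusion denoiser, and the single perturb-denoise-composite step compresses what would be a multi-step masked diffusion process.

```python
import numpy as np

def localized_resample(image, mask, denoise_fn, noise_scale=0.3, seed=0):
    """Hypothetical sketch of mask-restricted resampling.

    image: (H, W, C) array in [0, 1]; mask: (H, W) binary defect mask;
    denoise_fn: any callable mapping a perturbed image back toward a
    clean one (stand-in for the diffusion denoiser).
    """
    rng = np.random.default_rng(seed)
    m = mask[..., None]  # broadcast the mask over channels

    # Perturb only the defective region; pixels outside the mask keep
    # their original values.
    noise = rng.normal(0.0, noise_scale, size=image.shape)
    perturbed = image + m * noise

    # Denoise, then paste the unchanged background back in so the
    # correction stays confined to the masked area and global
    # consistency is preserved.
    denoised = denoise_fn(perturbed)
    return m * denoised + (1.0 - m) * image
```

The final composite line is what enforces the consistency property claimed in the abstract: unmasked pixels are returned bit-identical, so only the defective region can change.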
Qin Ren
Stony Brook University
Deep Learning · Medical Image Analysis

Yufei Wang
Nanyang Technological University

Lanqing Guo
University of Texas at Austin

Wen Zhang
Johns Hopkins University

Zhiwen Fan
Texas A&M University

Chenyu You
Assistant Professor, Stony Brook University
Machine Learning · AI for Health · Computer Vision · Medical Image Analysis · Multimedia