🤖 AI Summary
This study investigates when and how noise in diffusion models resolves into specific semantic concepts (e.g., “age”) and becomes locked into the denoising trajectory, with the aim of making generation more controllable, reliable, and predictable.
Method: We propose Prompt-Conditioned Intervention (PCI), a training-free, model-agnostic framework that injects and dynamically analyzes semantic concepts via prompt-guided interventions at multiple diffusion timesteps—without accessing internal model architecture or requiring fine-tuning.
Contribution/Results: We introduce Concept Insertion Success (CIS) as the first metric to quantify the temporal dynamics of concept emergence, revealing stage-wise sensitivity across models and concept types. Evaluated on multiple SOTA text-to-image diffusion models, PCI significantly improves the balance between semantic accuracy and content fidelity over strong baselines. Our approach establishes an interpretable, time-resolved analytical paradigm for controllable generation.
📝 Abstract
Diffusion models gradually denoise random noise into meaningful images, yet they are usually evaluated only by their final outputs. Generation unfolds along a trajectory, and analyzing this dynamic process is crucial for understanding how controllable, reliable, and predictable these models are in terms of their success and failure modes. In this work, we ask: when does noise turn into a specific concept (e.g., age) and lock in the denoising trajectory? We propose PCI (Prompt-Conditioned Intervention) to study this question. PCI is a training-free and model-agnostic framework for analyzing concept dynamics through diffusion time. The central idea is the analysis of Concept Insertion Success (CIS), defined as the probability that a concept inserted at a given timestep is preserved and reflected in the final image, offering a way to characterize the temporal dynamics of concept formation. Applied to several state-of-the-art text-to-image diffusion models and a broad taxonomy of concepts, PCI reveals diverse temporal behaviors across diffusion models, in which certain phases of the trajectory are more favorable to specific concepts, even within the same concept type. These findings also provide actionable insights for text-driven image editing, highlighting when interventions are most effective without requiring access to model internals or training, and yielding quantitatively stronger edits that achieve a better balance of semantic accuracy and content preservation than strong baselines. Code is available at: https://github.com/adagorgun/PCI-Prompt-Controlled-Interventions
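To make the CIS definition concrete, the abstract's idea can be sketched as follows: denoise with a base prompt, switch to a concept-augmented prompt at a chosen intervention timestep, and report the fraction of final images in which the concept is detected. This is a minimal illustrative sketch, not the paper's implementation; `run_diffusion` and `concept_present` are hypothetical callables standing in for the sampler and a concept detector.

```python
# Hypothetical sketch of Concept Insertion Success (CIS) estimation.
# `run_diffusion` and `concept_present` are illustrative placeholders,
# not an API from the PCI codebase.

def estimate_cis(samples, insert_timestep, run_diffusion, concept_present):
    """Estimate CIS(t): the fraction of generations in which a concept
    inserted at `insert_timestep` survives into the final image.

    samples: iterable of (noise, base_prompt, concept_prompt) triples.
    run_diffusion(noise, base_prompt, concept_prompt, t): denoises with
        `base_prompt` until step t, then with `concept_prompt` afterward,
        and returns the final image.
    concept_present(image): returns True if the target concept is
        detected in the final image (e.g., by a classifier or VQA model).
    """
    samples = list(samples)
    hits = 0
    for noise, base_prompt, concept_prompt in samples:
        image = run_diffusion(noise, base_prompt, concept_prompt, insert_timestep)
        if concept_present(image):
            hits += 1
    return hits / len(samples)
```

Sweeping `insert_timestep` over the diffusion schedule and plotting the resulting CIS values yields the time-resolved concept-emergence profile the paper studies.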