🤖 AI Summary
Text-to-image diffusion models suffer from limited generation diversity and susceptibility to reproducing training-set images. This paper proposes SPELL—a training-free, sparsely activated, trajectory-level repulsion mechanism. During diffusion sampling, SPELL dynamically injects lightweight repulsive forces along stochastic differential equation (SDE) trajectories based on nearest-neighbor distances, steering generations away from a specified protected image set—supporting both static protection sets and batch-wise self-updating reference sets. Implemented via SDE path perturbation, SPELL requires no model fine-tuning. Evaluated across multiple state-of-the-art diffusion models, SPELL significantly improves generation diversity (with only marginal FID degradation) and, for the first time, enables efficient, training-free shielding against the full ImageNet dataset (1.2M images). It outperforms existing training-free diversity-enhancement methods in both efficacy and scalability.
📝 Abstract
The increased adoption of diffusion models in text-to-image generation has triggered concerns about their reliability. Such models are now closely scrutinized under the lens of various metrics, notably calibration, fairness, and compute efficiency. We focus in this work on two issues that arise when deploying these models: a lack of diversity when prompting for images, and a tendency to recreate images from the training set. To solve both problems, we propose a method that coaxes the sampled trajectories of pretrained diffusion models to land on images that fall outside of a reference set. We achieve this by adding repellency terms to the diffusion SDE throughout the generation trajectory, which are triggered whenever the path is expected to land too close to an image in the shielded reference set. Our method is sparse in the sense that these repellency terms are zero and inactive most of the time, and even more so towards the end of the generation trajectory. Our method, named SPELL for sparse repellency, can be used either with a static reference set that contains protected images, or dynamically, by updating the set at each timestep with the expected images concurrently generated within a batch. We show that adding SPELL to popular diffusion models improves their diversity while impacting their FID only marginally, and performs comparatively better than other recent training-free diversity methods. We also demonstrate how SPELL can ensure a shielded generation away from a very large set of protected images by considering all 1.2M images from ImageNet as the protected set.
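To make the mechanism concrete, here is a minimal sketch of a sparse repellency term of the kind the abstract describes. All names (`spell_repellency`, `x0_hat`, `radius`) are hypothetical, and the update rule is an illustrative assumption, not the paper's exact formula: given the trajectory's current prediction of the final image, it adds a repulsive force only for shielded images within a protection radius, and is exactly zero otherwise.

```python
import numpy as np

def spell_repellency(x0_hat, reference_set, radius):
    """Illustrative sparse repellency force (hypothetical names).

    x0_hat:        predicted final image for the current trajectory,
                   flattened to a vector (e.g. a denoised estimate).
    reference_set: shielded images, array of shape (n, d).
    radius:        protection radius; repulsion is zero outside it.
    """
    force = np.zeros_like(x0_hat)
    for ref in reference_set:
        diff = x0_hat - ref
        dist = np.linalg.norm(diff)
        # Sparsity: the term is only active when the trajectory is
        # expected to land inside a shielded image's protection ball.
        if 0.0 < dist < radius:
            # Push away from the reference, scaled by how deep the
            # prediction sits inside the protection radius.
            force += (radius / dist - 1.0) * diff
    return force
```

In a sampler, such a force would be added to the SDE drift at each step; because it vanishes whenever no shielded image is nearby, most steps (especially late ones) are unaffected, matching the sparsity the abstract emphasizes.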