🤖 AI Summary
Diffusion models learn highly semantic internal features, but conventional feature extraction requires injecting noise into input images before passing them through the model, which distorts semantics and adds computational redundancy. To address this, we propose a noise-free paradigm for high-fidelity semantic feature extraction: a lightweight, unsupervised fine-tuning step that steers a pre-trained diffusion backbone to produce high-quality features directly from clean, noise-free inputs. The method introduces no auxiliary modules and requires no external supervision; the improvement comes entirely from optimizing the backbone's own weights. Extensive experiments across diverse feature extraction configurations and downstream tasks, including image classification, semantic segmentation, and content-based retrieval, show that our approach consistently outperforms noise-injection baselines, delivering superior feature quality while reducing inference overhead by an order of magnitude.
📝 Abstract
Internal features from large-scale pre-trained diffusion models have recently been established as powerful semantic descriptors for a wide range of downstream tasks. Works that use these features generally add noise to images before passing them through the model, because the models do not yield their most useful features when given images with little to no noise. We show that this added noise critically degrades the features, and that the degradation cannot be remedied by ensembling over different random noise draws. We address this issue by introducing a lightweight, unsupervised fine-tuning method that enables diffusion backbones to provide high-quality, noise-free semantic features. These features outperform previous diffusion features by a wide margin across a variety of extraction setups and downstream tasks, surpassing even ensemble-based methods at a fraction of the cost.
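To make the "add noise before extraction" convention concrete, the sketch below simulates the standard DDPM forward process that such pipelines apply to a clean image before feeding it to the backbone. The linear beta schedule and the closed-form `q(x_t | x_0)` are the textbook DDPM formulation; the image is a stand-in array, and all names here are illustrative rather than from the paper's code.

```python
import numpy as np

# Standard DDPM forward process used by noise-injection feature extractors:
# the clean image x0 is pushed to timestep t before the backbone sees it.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear beta schedule (DDPM default)
alpha_bars = np.cumprod(1.0 - betas)     # cumulative signal-retention factor

def add_noise(x0, t, rng):
    """Sample q(x_t | x_0): scale the image down and mix in Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))         # stand-in for an input image
x_light = add_noise(x0, 50, rng)         # lightly noised input
x_heavy = add_noise(x0, 500, rng)        # heavily noised input

# The signal fraction sqrt(alpha_bar_t) shrinks monotonically with t, so
# features extracted at larger t are computed from a more corrupted image,
# and averaging over several eps draws cannot restore the lost signal.
print(np.sqrt(alpha_bars[50]), np.sqrt(alpha_bars[500]))
```

The noise-free approach described above sidesteps this step entirely: after fine-tuning, the backbone is given `x0` directly, so no information is destroyed and no multi-noise ensembling is needed.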