Kernel Density Steering: Inference-Time Scaling via Mode Seeking for Image Restoration

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models for image restoration often suffer from inconsistent fidelity and prominent artifacts. To address this, the authors propose Kernel Density Steering (KDS), a plug-and-play, inference-time scalable framework guided by kernel density estimation that requires no model retraining. The method runs an N-particle ensemble of diffusion samples and computes patch-wise kernel density estimation gradients from their collective outputs, yielding a local mode-seeking guidance mechanism that leverages the ensemble's distributional statistics to suppress artifacts and enhance local consistency. Evaluated on real-world super-resolution and image inpainting tasks, the approach significantly improves quantitative metrics, including PSNR and LPIPS, while yielding perceptually more realistic and structurally coherent outputs. Extensive experiments demonstrate robustness and strong generalization across diverse degradation patterns and samplers, validating both effectiveness and practicality without architectural or training modifications.

📝 Abstract
Diffusion models show promise for image restoration, but existing methods often struggle with inconsistent fidelity and undesirable artifacts. To address this, we introduce Kernel Density Steering (KDS), a novel inference-time framework promoting robust, high-fidelity outputs through explicit local mode-seeking. KDS employs an $N$-particle ensemble of diffusion samples, computing patch-wise kernel density estimation gradients from their collective outputs. These gradients steer patches in each particle towards shared, higher-density regions identified within the ensemble. This collective local mode-seeking mechanism, acting as "collective wisdom", steers samples away from spurious modes prone to artifacts, arising from independent sampling or model imperfections, and towards more robust, high-fidelity structures. This allows us to obtain better quality samples at the expense of higher compute by simultaneously sampling multiple particles. As a plug-and-play framework, KDS requires no retraining or external verifiers, seamlessly integrating with various diffusion samplers. Extensive numerical validations demonstrate KDS substantially improves both quantitative and qualitative performance on challenging real-world super-resolution and image inpainting tasks.
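The mode-seeking step described in the abstract can be sketched as a Gaussian-KDE gradient update over the particle ensemble, which for a Gaussian kernel reduces to a mean-shift step: each particle is nudged toward the kernel-weighted mean of its neighbors, i.e. toward higher-density regions of the ensemble. The sketch below is illustrative only; the bandwidth, step size, and flattened-patch representation are assumptions, not values from the paper, and the full method applies this patch-wise inside a diffusion sampling loop.

```python
import numpy as np

def kde_steer(particles, bandwidth=0.5, step=0.1):
    """One local mode-seeking step over an ensemble of particles.

    particles: array of shape (N, D) -- N ensemble members, each a
    flattened patch of dimension D. Computes a Gaussian-KDE gradient
    direction (equivalently, a mean-shift vector) and moves each
    particle toward the shared higher-density region.
    Hyperparameters here are illustrative, not from the paper.
    """
    # Pairwise differences and squared distances between particles.
    diffs = particles[:, None, :] - particles[None, :, :]   # (N, N, D)
    sq_dist = (diffs ** 2).sum(axis=-1)                     # (N, N)
    # Gaussian kernel weights, normalized per particle.
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Mean-shift vector: kernel-weighted ensemble mean minus the particle.
    shift = w @ particles - particles
    return particles + step * shift

# Toy usage: an ensemble scattered around a single mode contracts
# toward it, i.e. the ensemble spread shrinks after steering.
rng = np.random.default_rng(0)
pts = rng.normal(0.0, 0.2, size=(8, 4))
for _ in range(50):
    pts = kde_steer(pts)
```

In the full framework this update would be interleaved with the diffusion sampler's denoising steps and applied independently per patch, so that only locally consistent structure is reinforced across particles.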
Problem

Research questions and friction points this paper is trying to address.

Inconsistent fidelity and prominent artifacts in diffusion-based image restoration
Spurious modes arising from independent sampling or model imperfections
Improving real-world super-resolution and inpainting without retraining or external verifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kernel Density Steering (KDS): inference-time collective mode-seeking for robust, high-fidelity outputs
Patch-wise kernel density estimation gradients computed over an N-particle ensemble
Plug-and-play framework requiring no retraining, compatible with various diffusion samplers