🤖 AI Summary
Existing diffusion models for protein generation overlook spatial heterogeneity in the data distribution (i.e., pronounced regional variations in point density), leading to suboptimal denoising. This work proposes a data-dependent smoothing generation framework: first, kernel density estimation (KDE) is introduced to adaptively predict the noise scale σ for each atomic coordinate, enabling geometry-aware local denoising; second, a σ-dependent, data-weighted score function is formulated and integrated into the Walk-Jump sampling paradigm. The method requires no additional structural priors, relying solely on data-driven noise modeling to substantially improve generation fidelity. It achieves consistent improvements across multiple structural metrics (e.g., RMSD, TM-score) and distributional metrics (e.g., contact-map correlation, backbone dihedral statistics). These results demonstrate the critical role of data-aware noise scheduling in high-dimensional, sparse biomolecular structure generation.
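To make the sampling side concrete, here is a minimal sketch of the Walk-Jump pattern the summary refers to: a Langevin "walk" on the σ-smoothed density followed by a one-step denoising "jump" via Tweedie's formula. The function name `walk_jump_sample`, the step sizes, and the passed-in `score` function are illustrative assumptions, not the paper's actual implementation (which uses a learned score with per-point σ values).

```python
import numpy as np

def walk_jump_sample(score, y0, sigma, n_walk=100, step=1e-3, seed=0):
    """Sketch of Walk-Jump sampling (hypothetical interface).

    'Walk': Langevin MCMC on the noise-smoothed density at scale sigma.
    'Jump': one-step empirical-Bayes denoising back toward clean data,
    x_hat = y + sigma^2 * score(y, sigma)  (Tweedie's formula).
    """
    rng = np.random.default_rng(seed)
    y = y0.astype(float).copy()
    for _ in range(n_walk):
        # Unadjusted Langevin step on the smoothed density.
        noise = rng.standard_normal(y.shape)
        y = y + step * score(y, sigma) + np.sqrt(2.0 * step) * noise
    # Denoising jump from the noisy sample to a clean estimate.
    return y + sigma**2 * score(y, sigma)
```

In the data-dependent setting described above, `sigma` would vary per point (as produced by the KDE preprocessing step) rather than being a single global scalar.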
📝 Abstract
Diffusion models have emerged as a powerful class of generative models by learning to iteratively reverse the noising process. Their ability to generate high-quality samples has extended beyond high-dimensional image data to other complex domains such as proteins, where data distributions are typically sparse and unevenly spread. Importantly, the sparsity itself is uneven. Empirically, we observe that while a small fraction of samples lie in dense clusters, the majority occupy regions of varying sparsity across the data space. Existing approaches largely ignore this data-dependent variability. In this work, we introduce a Data-Dependent Smoothing Walk-Jump framework that employs kernel density estimation (KDE) as a preprocessing step to estimate the noise scale $\sigma$ for each data point, followed by training a score model with these data-dependent $\sigma$ values. By incorporating local data geometry into the denoising process, our method accounts for the heterogeneous distribution of protein data. Empirical evaluations demonstrate that our approach yields consistent improvements across multiple metrics, highlighting the importance of data-aware $\sigma$ prediction for generative modeling in sparse, high-dimensional settings.
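The KDE preprocessing step described above can be sketched as follows: estimate each point's local density with a Gaussian kernel, then map low density (sparse regions) to a larger noise scale and high density (dense clusters) to a smaller one. This is a minimal illustration under assumed choices; the bandwidth, the σ range, and the linear density-to-σ mapping are placeholders, not the paper's actual settings.

```python
import numpy as np

def estimate_sigma(points, bandwidth=0.5, sigma_min=0.1, sigma_max=2.0):
    """Assign each point a data-dependent noise scale via Gaussian KDE.

    Sparse regions (low local density) receive sigma near sigma_max,
    dense clusters receive sigma near sigma_min. The linear mapping
    from density to sigma is an illustrative assumption.
    """
    # Pairwise squared distances, shape (N, N).
    diffs = points[:, None, :] - points[None, :, :]
    d2 = (diffs**2).sum(axis=-1)
    # Gaussian KDE density at each point (up to a constant factor).
    density = np.exp(-d2 / (2.0 * bandwidth**2)).mean(axis=1)
    # Normalize to [0, 1]: densest point -> 0, sparsest -> 1.
    t = 1.0 - (density - density.min()) / (density.max() - density.min())
    return sigma_min + t * (sigma_max - sigma_min)
```

The score model is then trained with these per-point $\sigma$ values, so denoising strength adapts to the local geometry of the data rather than following a single global noise schedule.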