🤖 AI Summary
This work addresses the difficulty generative models have in capturing heavy-tailed distributions and data with compact (bounded) support. To this end, we propose a framework built on learnable one-dimensional noise processes. The core innovation is to parameterize the noise distribution via a differentiable quantile function, making the noise process itself trainable and hence able to adapt to the statistics of the data. On top of this construction, standard training objectives, including Flow Matching and Consistency Models, carry over directly, yielding a single end-to-end generative model. Experiments on multivariate heavy-tailed, bounded-support, and mixture distributions demonstrate consistent improvements in density estimation accuracy and sample quality over fixed-noise baselines.
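To make the quantile-function parameterization concrete, here is a minimal PyTorch sketch, not the paper's exact construction: a piecewise-linear quantile function on a fixed grid whose knot values are kept monotone via cumulative softplus increments, with noise drawn by inverse-transform sampling. The class and parameter names (`LearnableQuantileNoise`, `n_knots`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableQuantileNoise(nn.Module):
    """Piecewise-linear quantile function with learnable, monotone knots (sketch)."""

    def __init__(self, n_knots: int = 64):
        super().__init__()
        # Unconstrained parameters: a starting value and raw (pre-softplus) increments.
        self.start = nn.Parameter(torch.tensor(-3.0))
        self.raw_steps = nn.Parameter(torch.zeros(n_knots))
        self.register_buffer("grid", torch.linspace(0.0, 1.0, n_knots + 1))

    def knot_values(self) -> torch.Tensor:
        # softplus keeps every increment positive, so Q is non-decreasing,
        # i.e. a valid quantile function.
        steps = F.softplus(self.raw_steps)
        return self.start + torch.cat(
            [torch.zeros(1, device=steps.device), torch.cumsum(steps, dim=0)]
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        """Evaluate Q(u) for u in [0, 1) by linear interpolation between knots."""
        knots = self.knot_values()
        n = self.grid.numel() - 1
        idx = torch.clamp((u * n).long(), max=n - 1)
        frac = u * n - idx.float()
        return knots[idx] * (1.0 - frac) + knots[idx + 1] * frac

    def sample(self, shape) -> torch.Tensor:
        # Inverse-transform sampling: u ~ Uniform(0, 1), noise = Q(u).
        # Fully differentiable in the quantile parameters (reparameterization).
        u = torch.rand(shape, device=self.grid.device)
        return self(u)
```

The point of the softplus-cumsum trick is that monotonicity, and thus validity as a quantile function, holds by construction for any parameter values, so the noise distribution can be trained by ordinary gradient descent. A piecewise-linear stand-in like this is only a sketch; richer monotone parameterizations (e.g. splines) would be needed to represent truly heavy tails.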
📝 Abstract
We introduce a general framework for constructing generative models using one-dimensional noising processes. Beyond diffusion processes, we outline examples that demonstrate the flexibility of our approach. Motivated by this, we propose a novel framework in which the 1D processes themselves are learnable, achieved by parameterizing the noise distribution through quantile functions that adapt to the data. Our construction integrates seamlessly with standard objectives, including Flow Matching and consistency models. Learning quantile-based noise naturally captures heavy tails and compact supports when present. Numerical experiments highlight both the flexibility and the effectiveness of our method.
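To illustrate how the learnable noise plugs into a standard objective, here is a hedged sketch of one conditional Flow Matching training step using the quantile noise above. It assumes the common linear interpolation path x_t = (1 - t) x_0 + t ε with velocity target ε - x_0 (an assumption, not taken from the paper), applies the 1D noise coordinate-wise, and uses a hypothetical `velocity_net` mapping (x_t, t) to a predicted velocity.

```python
import torch

def flow_matching_step(velocity_net, quantile_noise, x0, optimizer):
    """One training step; x0 is a (batch, dim) batch of data samples."""
    batch, dim = x0.shape
    # Coordinate-wise 1D noise drawn through the learnable quantile function.
    eps = quantile_noise.sample((batch, dim))
    t = torch.rand(batch, 1)                 # random time in (0, 1)
    x_t = (1.0 - t) * x0 + t * eps           # linear interpolation path
    target = eps - x0                        # velocity of the linear path
    loss = ((velocity_net(x_t, t) - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()  # gradients reach both the network and the noise parameters
    optimizer.step()
    return loss.item()
```

Because `eps` is produced by a differentiable quantile function, a single backward pass updates the velocity network and the noise distribution jointly, which is what makes the noising process itself adaptive.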