🤖 AI Summary
Deep generative models face a fundamental trade-off between sample fidelity and generalization: high-fidelity generation often leads to memorization of training data rather than learning the underlying manifold structure. To address this, we propose geometry-aware noise regularization within the flow matching framework, replacing the conventional homogeneous, isotropic Gaussian noise with locally manifold-adapted, anisotropic Gaussian noise whose spatially varying covariance captures the local geometry of the data. We prove that this geometric noise can be optimally estimated from the data and scales to large datasets. The method is architecture-agnostic (compatible with MLPs, CNNs, and Transformers) and particularly well suited to low-data and non-uniformly sampled regimes. Extensive experiments across diverse domains, including synthetic manifolds, point clouds, single-cell genomics, motion capture, and natural images, demonstrate consistent and significant improvements over standard flow matching, with gains most pronounced under data scarcity and non-uniform sampling, where generalization is markedly enhanced.
📝 Abstract
Deep generative models often face a fundamental tradeoff: high sample quality can come at the cost of memorisation, where the model reproduces training data rather than generalising across the underlying data geometry. We introduce Carré du champ flow matching (CDC-FM), a generalisation of flow matching (FM) that improves the quality-generalisation tradeoff by regularising the probability path with a geometry-aware noise. Our method replaces the homogeneous, isotropic noise in FM with a spatially varying, anisotropic Gaussian noise whose covariance captures the local geometry of the latent data manifold. We prove that this geometric noise can be optimally estimated from the data and is scalable to large datasets. Further, we provide an extensive experimental evaluation on diverse datasets (synthetic manifolds, point clouds, single-cell genomics, animal motion capture, and images) as well as various neural network architectures (MLPs, CNNs, and transformers). We demonstrate that CDC-FM consistently offers a better quality-generalisation tradeoff. We observe significant improvements over standard FM in data-scarce regimes and in highly non-uniformly sampled datasets, which are often encountered in AI for science applications. Our work provides a mathematical framework for studying the interplay between data geometry, generalisation and memorisation in generative models, as well as a robust and scalable algorithm that can be readily integrated into existing flow matching pipelines.
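To make the core idea concrete, the sketch below replaces the isotropic noise in a flow matching interpolant with anisotropic Gaussian noise whose covariance is estimated locally from the data. This is only an illustrative stand-in for the paper's carré du champ estimator: the k-nearest-neighbour covariance estimate, the noise schedule `sigma_t`, and all function names here are assumptions, not the authors' implementation.

```python
import numpy as np

def local_covariances(data, k=10, reg=1e-4):
    """Estimate a per-point covariance from each point's k nearest
    neighbours (illustrative proxy for a geometry-aware estimator).
    A small diagonal regulariser keeps each matrix positive definite."""
    n, d = data.shape
    covs = np.empty((n, d, d))
    for i in range(n):
        dists = np.linalg.norm(data - data[i], axis=1)
        nbrs = data[np.argsort(dists)[1:k + 1]]  # exclude the point itself
        diffs = nbrs - nbrs.mean(axis=0)
        covs[i] = diffs.T @ diffs / k + reg * np.eye(d)
    return covs

def geometric_noise(covs, rng):
    """Sample z_i ~ N(0, Sigma_i) for each point via Cholesky factors."""
    chols = np.linalg.cholesky(covs)
    eps = rng.standard_normal((covs.shape[0], covs.shape[2]))
    return np.einsum('nij,nj->ni', chols, eps)

# Flow-matching-style noisy interpolant between prior and data samples.
rng = np.random.default_rng(0)
x1 = rng.standard_normal((200, 3))   # stand-in for data endpoints
x0 = rng.standard_normal((200, 3))   # samples from the prior
t = rng.uniform(size=(200, 1))
sigma_t = 0.1 * t * (1 - t)          # illustrative noise schedule
covs = local_covariances(x1)
# Standard FM would add sigma_t * rng.standard_normal(x1.shape) here;
# instead the noise is shaped by the local covariance of the data.
xt = (1 - t) * x0 + t * x1 + sigma_t * geometric_noise(covs, rng)
```

In practice the local covariances can be precomputed once over the training set, so the per-batch cost of sampling the anisotropic noise is a batched matrix-vector product.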