Reducing Memorisation in Generative Models via Riemannian Bayesian Inference

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes a novel Bayesian inference approach that mitigates the tendency of generative models to over-memorise training data at the expense of generalisation. By introducing Riemannian geometry into the parameter spaces of flow matching and diffusion models for the first time, the method constructs a variational approximate posterior that adaptively captures the local geometric structure of the loss landscape and reflects the variability of the underlying data distribution. This geometrically informed posterior effectively suppresses memorisation during sampling while preserving strong generalisation capabilities. Theoretical analysis and empirical experiments demonstrate that the proposed Riemannian Bayesian inference framework significantly reduces memorisation in generative models, offering a new paradigm for balancing memorisation and generalisation.

πŸ“ Abstract
Modern generative models can produce realistic samples; however, balancing memorisation and generalisation remains an open problem. We approach this challenge from a Bayesian perspective by focusing on the parameter space of flow matching and diffusion models and constructing a predictive posterior that better captures the variability of the data distribution. In particular, we capture the geometry of the loss using a Riemannian metric and leverage a flexible approximate posterior that adapts to the local structure of the loss landscape. This approach allows us to sample generative models that resemble the original model but exhibit reduced memorisation. Empirically, we demonstrate that the proposed approach reduces memorisation while preserving generalisation. Further, we provide a theoretical analysis of our method, which explains our findings. Overall, our work illustrates how considering the geometry of the loss enables effective use of the parameter space, even for complex high-dimensional generative models.
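The abstract describes sampling model parameters from a geometry-aware approximate posterior rather than using the single trained parameter vector. The following is a minimal sketch of that idea, not the paper's actual method: it assumes a toy quadratic per-sample loss, a diagonal Fisher-style metric built from per-sample gradients, and a Gaussian approximate posterior whose covariance is the scaled inverse metric. All names (`theta_star`, `alpha`, `sample_model_params`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "trained" parameters of a small model (toy stand-in for the
# weights of a flow matching / diffusion model).
theta_star = np.array([1.0, -0.5, 2.0])

def per_sample_grad(theta, x):
    # Gradient of a toy per-sample loss 0.5 * ||theta - x||^2.
    return theta - x

# Synthetic "training data" living in parameter-sized space (illustration only).
data = rng.normal(loc=theta_star, scale=0.3, size=(256, 3))

# Diagonal Fisher-style metric from per-sample gradients: G ~ E[g g^T] (diagonal).
# The metric adapts to the local geometry of the loss landscape.
grads = np.stack([per_sample_grad(theta_star, x) for x in data])
G_diag = grads.var(axis=0) + 1e-6

# Geometry-aware Gaussian approximate posterior: N(theta_star, alpha * G^{-1}).
alpha = 0.1
post_std = np.sqrt(alpha / G_diag)

def sample_model_params(n):
    # Flat directions of the loss (small metric) receive larger perturbations;
    # intuitively, this is what perturbs away memorised solutions while keeping
    # the sampled models close to the original one.
    return theta_star + rng.normal(size=(n, 3)) * post_std

samples = sample_model_params(5)
print(samples.shape)  # (5, 3)
```

In a real diffusion or flow matching model, `theta_star` would be millions of weights and the metric would require a scalable curvature approximation; the sketch only conveys the shape of the posterior-sampling step.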
Problem

Research questions and friction points this paper is trying to address.

memorisation
generalisation
generative models
Bayesian inference
Riemannian geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian Bayesian Inference
Generative Models
Memorisation Reduction
Loss Geometry
Approximate Posterior
πŸ”Ž Similar Papers
No similar papers found.
Johanna Marie Gegenfurtner
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Albert KjΓΈller Jacobsen
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Naima Elosegui Borras
TU Berlin
Theoretical Machine Learning · Computational Neuroscience
Alejandro Valverde Mahou
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Georgios Arvanitidis
Cognitive Systems, DTU Compute, Technical University of Denmark
Machine Learning · Geometry