Principled Latent Diffusion for Graphs via Laplacian Autoencoders

šŸ“… 2026-01-20
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This work addresses the scalability limitations of graph diffusion models, which suffer from quadratic computational complexity and waste capacity modeling non-edges in sparse graphs, while conventional latent-space approaches struggle to achieve near-lossless reconstruction. To overcome these challenges, the authors propose LG-Flow, a framework that compresses graphs into linear-dimensional node embeddings via a permutation-equivariant Laplacian autoencoder, coupled with a provably complete adjacency-matrix recovery mechanism. Within this low-dimensional latent space, a flow-matching-based diffusion Transformer enables efficient generation. LG-Flow is the first latent-space graph diffusion model to support near-lossless reconstruction, which matters because even a single edge error in a decoded adjacency matrix can invalidate an entire sample. The method reduces the latent representation's dimensionality from O(N²) to O(N) and achieves up to a 1000Ɨ training speedup, substantially alleviating the scalability bottleneck in graph diffusion models.
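The flow-matching objective mentioned above can be sketched in a few lines. This is a generic conditional flow-matching training step on latent node embeddings, not the paper's actual architecture; the shapes, the `v_model` placeholder, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 8, 4                    # nodes, latent dimension per node (assumed)
z1 = rng.normal(size=(N, d))   # "data": latent node embeddings from an autoencoder
z0 = rng.normal(size=(N, d))   # noise sample

t = rng.uniform()              # random interpolation time in [0, 1]
zt = (1.0 - t) * z0 + t * z1   # linear interpolation between noise and data
v_target = z1 - z0             # target velocity field along this path

def v_model(zt, t):
    """Stand-in for the diffusion Transformer's predicted velocity."""
    return np.zeros_like(zt)   # hypothetical placeholder prediction

# Flow-matching loss: regress the predicted velocity onto the target velocity.
loss = np.mean((v_model(zt, t) - v_target) ** 2)
```

At sampling time, the learned velocity field is integrated from t = 0 (noise) to t = 1 to produce new latent embeddings, which the decoder then maps back to a graph.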

šŸ“ Abstract
Graph diffusion models achieve state-of-the-art performance in graph generation but suffer from quadratic complexity in the number of nodes -- and much of their capacity is wasted modeling the absence of edges in sparse graphs. Inspired by latent diffusion in other modalities, a natural idea is to compress graphs into a low-dimensional latent space and perform diffusion there. However, unlike images or text, graph generation requires nearly lossless reconstruction, as even a single error in decoding an adjacency matrix can render the entire sample invalid. This challenge has remained largely unaddressed. We propose LG-Flow, a latent graph diffusion framework that directly overcomes these obstacles. A permutation-equivariant autoencoder maps each node into a fixed-dimensional embedding from which the full adjacency is provably recoverable, enabling near-lossless reconstruction for both undirected graphs and DAGs. The dimensionality of this latent representation scales linearly with the number of nodes, eliminating the quadratic bottleneck and making it feasible to train larger and more expressive models. In this latent space, we train a Diffusion Transformer with flow matching, enabling efficient and expressive graph generation. Our approach achieves competitive results against state-of-the-art graph diffusion models, while achieving up to $1000\times$ speed-up.
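The abstract's claim that the adjacency matrix is provably recoverable from per-node embeddings rests on a standard spectral fact: the combinatorial Laplacian L = D āˆ’ A is symmetric positive semidefinite, so embeddings built from its eigendecomposition satisfy ZZᵀ = L, and A follows from L's diagonal. A minimal numpy sketch of this underlying fact (full-spectrum embeddings for illustration only; the paper's autoencoder compresses to a fixed dimension):

```python
import numpy as np

# Toy undirected graph: a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian, L = D - A

# L is symmetric PSD, so L = U diag(lam) U^T with lam >= 0.
lam, U = np.linalg.eigh(L)
lam = np.clip(lam, 0.0, None)           # guard against tiny negative round-off

Z = U * np.sqrt(lam)                    # per-node embeddings with Z Z^T = L
L_rec = Z @ Z.T                         # reconstruct the Laplacian
A_rec = np.diag(np.diag(L_rec)) - L_rec # invert L = D - A to recover adjacency
```

Here each node gets an N-dimensional embedding, so recovery is exact but not compressed; the paper's contribution is making this work with fixed-dimensional embeddings so that the total latent size grows linearly in N.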
Problem

Research questions and friction points this paper is trying to address.

graph diffusion
quadratic complexity
sparse graphs
lossless reconstruction
latent space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Diffusion
Graph Generation
Laplacian Autoencoder
Permutation-Equivariant
Flow Matching
Antoine Siraudin
Faculty of Computer Science, RWTH Aachen University, Aachen, Germany
Christopher Morris
RWTH Aachen University
Machine learning on graphs Ā· graph neural networks Ā· machine learning for discrete algorithms