🤖 AI Summary
This work investigates whether high-capacity encoders such as Transformers are indispensable for discrete graph generation and proposes GenGNN—a lightweight, modular graph generation framework based on message passing—as a viable alternative. By employing graph neural networks (GNNs) as the backbone of a discrete diffusion model, GenGNN incorporates residual connections to mitigate over-smoothing and analyzes diffusion representations from a metric space perspective. Experimental results demonstrate that GenGNN achieves over 90% validity on Tree and Planar graph benchmarks and 99.49% validity in molecular generation, while offering 2–5× faster inference than graph Transformer-based approaches. These findings underscore the framework’s efficiency, scalability, and competitive performance without relying on computationally intensive architectures.
📝 Abstract
Discrete graph generation has emerged as a powerful paradigm for modeling graph data, often relying on highly expressive neural backbones such as transformers or higher-order architectures. We revisit this design choice by introducing GenGNN, a modular message-passing framework for graph generation. Diffusion models built on GenGNN achieve more than 90% validity on the Tree and Planar datasets, within a small margin of graph transformers, at 2–5× faster inference. For molecule generation, DiGress with a GenGNN backbone achieves 99.49% validity. A systematic ablation study quantifies the contribution of each GenGNN component, highlighting the need for residual connections to mitigate over-smoothing on complex graph structures. Through scaling analyses and a principled metric-space view of learned diffusion representations, we investigate whether GNNs can serve as expressive neural backbones for discrete diffusion.
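The residual message-passing design mentioned above can be illustrated with a minimal pure-Python sketch. This is a hypothetical toy, not the authors' implementation: learned weight matrices, edge features, and the diffusion machinery are omitted, and the layer simply adds a ReLU-activated neighbor average back onto each node's features — the residual connection that helps keep node representations distinct when many layers are stacked.

```python
def residual_mp_layer(h, adj):
    """One residual message-passing layer (illustrative sketch only).

    h: list of per-node feature vectors; adj: list of neighbor-index lists.
    Each node averages its neighbors' features, applies a ReLU, and adds
    the result to its own features. Without the residual, repeated
    averaging drives all node features toward each other (over-smoothing).
    """
    out = []
    for i, feats in enumerate(h):
        nbrs = adj[i]
        if nbrs:
            # Mean-aggregate neighbor features, dimension by dimension.
            agg = [sum(h[j][k] for j in nbrs) / len(nbrs)
                   for k in range(len(feats))]
        else:
            agg = [0.0] * len(feats)
        update = [max(a, 0.0) for a in agg]  # ReLU nonlinearity
        # Residual connection: input features plus the layer's update.
        out.append([f + u for f, u in zip(feats, update)])
    return out

# Toy example: 3-node path graph 0-1-2 with 2-dimensional features.
adj = [[1], [0, 2], [1]]
h = [[1.0, -1.0], [0.0, 2.0], [-1.0, 0.5]]
for _ in range(4):  # stack several layers
    h = residual_mp_layer(h, adj)
print(len(h), len(h[0]))  # 3 2
```

In the actual framework, a layer like this would sit inside a denoising network that predicts clean node and edge categories at each diffusion step; the sketch only shows why adding the update to the input, rather than replacing it, counteracts the feature collapse that plain stacked aggregation causes.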