Graph Guided Diffusion: Unified Guidance for Conditional Graph Generation

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph diffusion models struggle to effectively guide the generation of discrete, combinatorial graph structures under arbitrary (differentiable or non-differentiable) reward signals. Method: This paper reformulates conditional graph generation as a stochastic control problem and introduces a zero-shot, plug-and-play diffusion guidance framework. It unifies gradient-based guidance, control-signal-driven guidance, and zeroth-order optimization via diffusion process reparameterization and forward reward evaluation—enabling end-to-end, gradient-free intervention. Contribution/Results: Theoretically and empirically, the method overcomes the guidance bottleneck under non-differentiable rewards. Experiments on motif-constrained generation, fairness-aware graph synthesis, and link prediction demonstrate substantial improvements in reward alignment, while preserving both structural fidelity and generative diversity.

📝 Abstract
Diffusion models have emerged as powerful generative models for graph generation, yet their use for conditional graph generation remains a fundamental challenge. In particular, guiding diffusion models on graphs under arbitrary reward signals is difficult: gradient-based methods, while powerful, are often unsuitable due to the discrete and combinatorial nature of graphs, and non-differentiable rewards further complicate gradient-based guidance. We propose Graph Guided Diffusion (GGDiff), a novel guidance framework that interprets conditional diffusion on graphs as a stochastic control problem to address this challenge. GGDiff unifies multiple guidance strategies, including gradient-based guidance (for differentiable rewards), control-based guidance (using control signals from forward reward evaluations), and zero-order approximations (bridging gradient-based and gradient-free optimization). This comprehensive, plug-and-play framework enables zero-shot guidance of pre-trained diffusion models under both differentiable and non-differentiable reward functions, adapting well-established guidance techniques to graph generation--a direction largely unexplored. Our formulation balances computational efficiency, reward alignment, and sample quality, enabling practical conditional generation across diverse reward types. We demonstrate the efficacy of GGDiff in various tasks, including constraints on graph motifs, fairness, and link prediction, achieving superior alignment with target rewards while maintaining diversity and fidelity.
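The control-based guidance described above steers a pre-trained diffusion model using only forward reward evaluations, with no gradients. A minimal sketch of that idea, assuming a simple reward-weighted resampling of candidate denoising states (the function names and the softmax-style weighting are illustrative, not the paper's actual algorithm):

```python
import numpy as np

def reward_weighted_resample(candidates, reward, temperature=1.0, rng=None):
    """At one denoising step, score candidate next-states with forward
    reward evaluations and resample them in proportion to
    exp(reward / temperature). Works for non-differentiable rewards,
    since the reward is only ever called, never differentiated."""
    rng = np.random.default_rng(rng)
    scores = np.array([reward(c) for c in candidates])
    # Subtract the max before exponentiating for numerical stability
    weights = np.exp((scores - scores.max()) / temperature)
    weights /= weights.sum()
    idx = rng.choice(len(candidates), size=len(candidates), p=weights)
    return [candidates[i] for i in idx]

# Toy usage: candidates are edge-probability matrices; the (hypothetical)
# reward favors denser graphs, so resampling concentrates on them.
cands = [np.zeros((3, 3)), np.ones((3, 3))]
dense_first = reward_weighted_resample(cands, lambda a: float(a.sum()),
                                       temperature=0.01, rng=0)
```

Lowering `temperature` makes the resampling greedier toward high-reward candidates; raising it preserves more generative diversity, mirroring the reward-alignment/diversity trade-off the abstract mentions.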
Problem

Research questions and friction points this paper is trying to address.

Addressing conditional graph generation challenges with diffusion models
Unifying guidance strategies for differentiable and non-differentiable rewards
Enabling practical graph generation with reward alignment and diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic control for graph diffusion
Unified gradient and control guidance
Zero-shot adaptation for diverse rewards
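The zero-order approximation listed above can be illustrated with a standard two-point Gaussian-smoothing gradient estimator; this is a generic sketch of that well-known technique, not the paper's implementation, and the toy edge-count reward is hypothetical:

```python
import numpy as np

def zeroth_order_grad(reward, theta, sigma=0.1, n_samples=64, rng=None):
    """Estimate the gradient of a (possibly non-differentiable) reward
    at theta via two-point Gaussian smoothing: perturb theta along random
    directions u and difference the reward evaluations. Antithetic
    (plus/minus) evaluations reduce estimator variance."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        r_plus = reward(theta + sigma * u)
        r_minus = reward(theta - sigma * u)
        grad += (r_plus - r_minus) / (2.0 * sigma) * u
    return grad / n_samples

# Toy non-differentiable reward: number of edges whose (symmetrized)
# probability exceeds 0.5 -- a hypothetical stand-in for a motif reward.
def edge_count_reward(p):
    a = (p + p.T) / 2.0
    return float(np.sum(a > 0.5))

rng = np.random.default_rng(0)
theta = rng.uniform(0.3, 0.7, size=(6, 6))   # edge-probability parameters
g = zeroth_order_grad(edge_count_reward, theta, sigma=0.05, n_samples=200, rng=1)
theta_guided = theta + 0.1 * g               # one gradient-free guidance step
```

An estimator of this form lets gradient-style guidance updates be applied even when the reward is a black box, which is the bridge between gradient-based and gradient-free optimization that the summary describes.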