Agents of Diffusion: Enhancing Diffusion Language Models with Multi-Agent Reinforcement Learning for Structured Data Generation (Extended Version)

📅 2026-01-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models often struggle to simultaneously achieve semantic richness and structural compliance when generating structured data such as JSON. To this end, the paper introduces multi-agent reinforcement learning into diffusion-based language models for the first time, employing a collaborative framework between a prompt-optimizing agent and a critique agent that iteratively guides the generation process through natural language feedback. This approach enhances semantic diversity while strictly preserving structural consistency, without requiring model parameter updates or handcrafted constraints. Experimental results demonstrate that the proposed framework significantly outperforms existing autoregressive and diffusion models across multiple structured text generation benchmarks, successfully unifying high-fidelity structure adherence with rich semantic variation.

📝 Abstract
Generating high-quality structured data, such as JSON records, remains a fundamental challenge for large language models (LLMs), particularly when semantic richness must coexist with strict schema adherence. While autoregressive LLMs offer strong structural consistency, they often struggle with semantic variation and output diversity. In contrast, diffusion language models (DLMs) introduce powerful mechanisms for semantic richness and bidirectional decoding, yet lack the inductive biases needed for reliable structure preservation. We present Agents of Diffusion (AoD), a novel framework that unifies the generative flexibility of DLMs with the reasoning capabilities of autoregressive models through language-mediated reinforcement learning. AoD frames structured text generation as a multi-agent alignment process, where a prompt optimization agent collaborates with a judge agent to iteratively guide a DLM using natural language feedback. This approach enables controllable, schema-consistent generation without modifying model parameters or relying on handcrafted constraints. AoD advances the state of controllable generation by demonstrating that diffusion models, when supervised by cooperative agents, can achieve both high semantic novelty and structural fidelity. Across multiple structured data benchmarks, AoD consistently outperforms diffusion and autoregressive baselines, establishing a new path forward for structure-aware, diversity-enhanced text synthesis.
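The abstract describes an iterative generate → judge → refine loop in which feedback flows as natural language rather than parameter updates. The paper does not publish code, so the sketch below is purely illustrative: the diffusion model, judge, and prompt optimizer are hypothetical stubs (the function names, toy schema, and canned outputs are assumptions, not the authors' implementation), and only the control flow mirrors the described framework.

```python
import json

# Toy schema: required top-level keys of the target JSON record (assumption).
REQUIRED_KEYS = {"name", "age"}

def diffusion_generate(prompt):
    """Stub for the diffusion LM. A real system would sample from a DLM;
    here the stub only produces a complete record once feedback mentions 'age'."""
    if "age" in prompt:
        return '{"name": "Ada", "age": 36}'
    return '{"name": "Ada"}'  # structurally valid but missing a required field

def judge(output):
    """Critique/judge agent: validate JSON syntax and schema coverage,
    returning (ok, natural-language feedback)."""
    try:
        record = json.loads(output)
    except json.JSONDecodeError:
        return False, "Output is not valid JSON."
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        return False, f"Add the missing keys: {sorted(missing)}."
    return True, "OK"

def optimize_prompt(prompt, feedback):
    """Prompt-optimization agent: fold the judge's feedback into the next prompt."""
    return prompt + " Constraint: " + feedback

def aod_loop(prompt, max_rounds=5):
    """Iterate generate -> judge -> refine. Note that no model weights change;
    all adaptation happens in the prompt, as the paper describes."""
    output = diffusion_generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = judge(output)
        if ok:
            return output
        prompt = optimize_prompt(prompt, feedback)
        output = diffusion_generate(prompt)
    return output
```

With the stubs above, `aod_loop("Generate a person record as JSON.")` fails the first judge pass (missing `age`), absorbs the feedback into the prompt, and returns a schema-complete record on the second round.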
Problem

Research questions and friction points this paper is trying to address.

structured data generation
schema adherence
semantic richness
diffusion language models
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion language models
multi-agent reinforcement learning
structured data generation
schema consistency
natural language feedback
Aja Khanal
University of Western Ontario, London, Canada
Kaushik T. Ranade
University of Western Ontario, London, Canada
Rishabh Agrawal
University of Western Ontario, London, Canada
K. S. Basu
ICASSSD, Cambridge, Canada
Apurva Narayan
Western University, University of British Columbia and University of Waterloo
Data Analytics · Machine Learning · AI for Social Good · Safety and Security in CPS