Latent-DARM: Bridging Discrete Diffusion And Autoregressive Models For Reasoning

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively coordinating heterogeneous language models in multi-agent systems, where autoregressive models lack global planning capabilities and discrete diffusion models, despite their reasoning strengths, suffer from poor text fluency. To bridge this gap, the authors propose Latent-DARM, a framework that establishes communication between these models in a shared latent space. By aligning their latent representations, Latent-DARM enables efficient collaboration between a discrete diffusion model (acting as planner) and an autoregressive model (acting as executor), combining non-sequential and sequential generation paradigms. The approach achieves strong global reasoning and high linguistic fluency with minimal token consumption. Experiments show significant gains, reaching 36.0% and 14.0% accuracy on the DART-5 and AIME2024 benchmarks, respectively, while using less than 2.2% of the token budget required by state-of-the-art reasoning models.
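The planner-to-executor handoff described above can be sketched in a minimal form. Everything below is a hypothetical illustration, not the paper's implementation: the dimensions, the `bridge` projector, and the random stand-in modules are assumptions. The idea is that the DDLM planner emits a few latent plan vectors, a learned linear projector maps them into the ARM executor's embedding space, and the executor consumes them as a soft prompt rather than as (dysfluent) decoded plan text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent dims and number of plan vectors (illustrative only).
D_PLAN, D_EXEC, PLAN_SLOTS = 64, 48, 4

def ddlm_plan_latents() -> np.ndarray:
    """Stand-in for the DDLM planner: returns PLAN_SLOTS latent plan vectors."""
    return rng.standard_normal((PLAN_SLOTS, D_PLAN))

# A linear projector aligning the planner's latent space with the executor's
# embedding space; random here, but learned in a real system.
W_proj = rng.standard_normal((D_PLAN, D_EXEC)) / np.sqrt(D_PLAN)

def bridge(plan_latents: np.ndarray) -> np.ndarray:
    """Project planner latents into the executor's embedding space."""
    return plan_latents @ W_proj

plan = ddlm_plan_latents()                         # (PLAN_SLOTS, D_PLAN)
soft_prompt = bridge(plan)                         # (PLAN_SLOTS, D_EXEC)
prompt_embeds = rng.standard_normal((16, D_EXEC))  # executor's token embeddings

# The executor reads [projected plan ; token embeddings] as one sequence,
# so the plan is communicated without ever being decoded to text.
executor_input = np.concatenate([soft_prompt, prompt_embeds], axis=0)
print(executor_input.shape)  # (20, 48)
```

The key design point is that only a handful of latent vectors cross the model boundary, which is why the token cost of communication stays small.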

📝 Abstract
Most multi-agent systems rely exclusively on autoregressive language models (ARMs), which generate text sequentially. Although effective for fluent text, ARMs limit global reasoning and plan revision. Discrete Diffusion Language Models (DDLMs), by contrast, enable non-sequential, globally revisable generation and have shown strong planning capabilities, but their limited text fluency hinders direct collaboration with ARMs. We introduce Latent-DARM, a latent-space communication framework that bridges DDLMs (as planners) and ARMs (as executors) to maximize the benefits of their collaboration. Across mathematical, scientific, and commonsense reasoning benchmarks, Latent-DARM outperforms text-based interfaces on average, improving accuracy from 27.0% to 36.0% on DART-5 and from 0.0% to 14.0% on AIME2024. Latent-DARM approaches the results of state-of-the-art reasoning models while using less than 2.2% of their token budget. This work advances collaboration among agents built on heterogeneous models.
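The abstract's "aligning their latent representations" suggests a projector trained to map planner latents onto executor embeddings. The snippet below fits such a linear projector by gradient descent on synthetic paired data; the data, dimensions, and plain mean-squared-error objective are illustrative assumptions, not the paper's training recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
D_PLAN, D_EXEC, N = 64, 48, 256  # hypothetical latent dims and pair count

# Synthetic paired data: planner latents Z and the executor-space targets E
# they should align with (a stand-in for embeddings of reference plans).
Z = rng.standard_normal((N, D_PLAN))
E = Z @ (rng.standard_normal((D_PLAN, D_EXEC)) * 0.1)

W = np.zeros((D_PLAN, D_EXEC))  # linear projector to be learned
lr = 0.5
for _ in range(300):
    grad = Z.T @ (Z @ W - E) / N  # gradient of 0.5 * mean-squared error
    W -= lr * grad

final_mse = float(np.mean((Z @ W - E) ** 2))
print(final_mse < 1e-6)
```

On this synthetic task the alignment is exactly linear, so gradient descent drives the error to near machine precision; in practice the projector would be trained jointly with, or on top of, frozen planner and executor models.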
Problem

Research questions and friction points this paper is trying to address.

multi-agent collaboration
autoregressive models
discrete diffusion models
reasoning
heterogeneous models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent-DARM
Discrete Diffusion Language Models
Autoregressive Models
Multi-agent Collaboration
Latent-space Communication