🤖 AI Summary
Autoregressive language models generate tokens strictly left to right, and this sequential dependency becomes a bottleneck on complex reasoning and long-horizon planning tasks. To address this, the paper applies discrete diffusion modeling to symbolic reasoning and proposes the Multi-Granularity Diffusion Modeling (MDM) framework. MDM explicitly models the difficulty distribution over subgoals and dynamically prioritizes harder subgoals during training, enabling search-free, end-to-end reasoning. By departing from the conventional autoregressive generation paradigm, MDM sidesteps long-range dependency issues and error accumulation. On the Countdown and Sudoku benchmarks, MDM achieves 91.5% and 100% accuracy, respectively, substantially outperforming autoregressive baselines (45.8% and 20.7%) without external verification or search mechanisms. Key contributions include: (i) the first use of discrete diffusion modeling for symbolic reasoning; (ii) a difficulty-aware, multi-granularity diffusion modeling paradigm; and (iii) an efficient, robust end-to-end reasoning architecture.
📝 Abstract
Autoregressive language models, despite their impressive capabilities, struggle with complex reasoning and long-term planning tasks. We introduce discrete diffusion models as a novel solution to these challenges. Through the lens of subgoal imbalance, we demonstrate how diffusion models effectively learn difficult subgoals that elude autoregressive approaches. We propose Multi-granularity Diffusion Modeling (MDM), which prioritizes subgoals based on difficulty during learning. On complex tasks like Countdown, Sudoku, and Boolean Satisfiability Problems, MDM significantly outperforms autoregressive models without using search techniques. For instance, MDM achieves 91.5% and 100% accuracy on Countdown and Sudoku, respectively, compared to 45.8% and 20.7% for autoregressive models. Our work highlights the potential of diffusion-based approaches in advancing AI capabilities for sophisticated language understanding and problem-solving tasks.
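To make the two core ideas concrete, here is a minimal sketch in plain Python: an absorbing-state masking step (the forward corruption used by masked discrete diffusion, which the model learns to invert in parallel rather than left to right) and a difficulty-aware reweighting of per-token losses. Both functions are illustrative assumptions; the paper's exact multi-granularity schedule and weighting scheme are not reproduced here, and `mask_id` is a hypothetical placeholder token.

```python
import random

def masked_diffusion_step(tokens, mask_ratio, mask_id=-1):
    """Forward corruption for absorbing-state discrete diffusion (sketch).

    Masks `mask_ratio` of the positions at random; a denoising model would
    then be trained to recover all masked tokens in parallel, instead of
    predicting them one at a time as in autoregressive decoding.
    """
    n_mask = max(1, round(mask_ratio * len(tokens)))
    idx = random.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    for i in idx:
        corrupted[i] = mask_id
    return corrupted, sorted(idx)

def difficulty_weights(per_token_losses):
    """Difficulty-aware reweighting (simplified assumption, not MDM's exact
    scheme): normalize per-token losses into weights so that harder subgoal
    tokens contribute more to the training objective.
    """
    total = sum(per_token_losses) or 1.0
    return [loss / total for loss in per_token_losses]
```

For example, corrupting a 10-token sequence with `mask_ratio=0.3` masks 3 positions, and a subgoal token with a higher current loss receives a proportionally larger weight, which is the sense in which learning is "prioritized by difficulty."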