🤖 AI Summary
To address the high computational overhead and reliance on external discriminators in test-time scaling (TTS) for diffusion-based multimodal large language models (dMLLMs), this work proposes a dual-axis collaborative scaling framework that enhances generative diversity via trajectory exploration and improves output stability through iterative refinement. Our key contributions are: (1) a hierarchical adaptive search algorithm with a reduced time complexity of O(N+T); and (2) an endogenous self-verification feedback mechanism that quantifies image-text alignment without external discriminators. The method integrates diffusion modeling, multimodal reasoning, trajectory sampling optimization, and hierarchical pruning/expansion strategies. Evaluated on the GenEval benchmark, our approach achieves significant gains in generation quality and attains up to 6× higher inference efficiency than linear search, while maintaining full compatibility with diverse dMLLM architectures, including Lumina-DiMOO, MMaDA, and Muddit.
📝 Abstract
Diffusion Multimodal Large Language Models (dMLLMs) have recently emerged as a novel architecture unifying image generation and understanding. However, developing effective and efficient Test-Time Scaling (TTS) methods to unlock their full generative potential remains an underexplored challenge. To address this, we propose dMLLM-TTS, a novel framework operating on two complementary scaling axes: (1) trajectory exploration scaling to enhance the diversity of generated hypotheses, and (2) iterative refinement scaling for stable generation. Conventional TTS approaches typically perform a linear search across these two dimensions, incurring a substantial computational cost of O(NT) and requiring an external verifier for best-of-N selection. To overcome these limitations, we propose two innovations. First, we design an efficient hierarchical search algorithm with O(N+T) complexity that adaptively expands and prunes sampling trajectories. Second, we introduce a self-verified feedback mechanism that leverages the dMLLMs' intrinsic image understanding capabilities to assess text-image alignment, eliminating the need for an external verifier. Extensive experiments on the GenEval benchmark across three representative dMLLMs (Lumina-DiMOO, MMaDA, and Muddit) show that our framework substantially improves generation quality while achieving up to 6× greater efficiency than linear search. Project page: https://github.com/Alpha-VLLM/Lumina-DiMOO.