🤖 AI Summary
This work addresses offline black-box optimization in settings where design variables exhibit strong bidirectional dependencies, which conventional autoregressive language models, generating strictly left to right, struggle to capture. To overcome this limitation, the study is the first to bring diffusion language models to this task. The authors construct a unified prompt-response corpus with explicit delimiter tokens that clearly mark field boundaries, and apply a two-stage post-training framework, combining masked response prediction with reinforcement learning, to align model outputs with high-value designs. This bridges the gap between generic pretraining and the target optimization domain. Evaluated in the small-data regime of Design-Bench, the method achieves state-of-the-art performance.
📝 Abstract
We study offline black-box optimization (BBO), which aims to discover improved designs from an offline dataset of designs and labels, a problem common in robotics, DNA sequence design, and materials science, where labeled samples are scarce. While recent work applies autoregressive LLMs to BBO by formatting tasks as natural-language prompts, their left-to-right design generation struggles to capture the strong bidirectional dependencies inherent in design problems. To address this, we propose adapting diffusion LLMs to offline BBO to leverage their bidirectional modeling capabilities. However, a domain gap separates the natural-text pre-training of diffusion LLMs from the heterogeneous signals in BBO (prompts, designs, and labels). To bridge this gap, we construct a unified prompt-response corpus and introduce delimiter tokens that explicitly mark field boundaries for domain adaptation. We further propose a two-stage post-training framework to align diffusion LLM generation with high-value designs: the first stage performs supervised fine-tuning on the unified dataset via masked-response prediction, and the second stage applies reinforcement learning with rewards defined by label improvements. Our method achieves state-of-the-art results on the small-data settings of Design-Bench.
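The pipeline the abstract outlines, serializing each (prompt, design, label) triple with delimiter tokens, masking only the response for stage-1 supervised fine-tuning, and rewarding label improvement in stage 2, can be sketched as below. This is a minimal illustration, not the authors' implementation; all names (`format_example`, the delimiter strings, `improvement_reward`, the masking ratio) are assumed for exposition.

```python
import random

# Hypothetical special tokens marking field boundaries in the unified
# prompt-response corpus (illustrative; the paper's actual delimiters may differ).
BOS, PROMPT_END, DESIGN_END, EOS = "<bos>", "<prompt_end>", "<design_end>", "<eos>"
MASK = "<mask>"

def format_example(task_desc, design, label):
    """Serialize one (prompt, design, label) triple into a delimited
    prompt-response pair, so heterogeneous BBO fields become one text stream."""
    prompt = f"{BOS} {task_desc} {PROMPT_END}"
    response = f"{' '.join(map(str, design))} {DESIGN_END} {label} {EOS}"
    return prompt, response

def mask_response(response_tokens, mask_ratio=0.5):
    """Stage 1 (masked-response prediction): mask a random subset of
    *response* tokens only; the prompt is left intact, so the model learns
    to reconstruct designs and labels conditioned on the full prompt."""
    masked, targets = [], []
    for tok in response_tokens:
        if random.random() < mask_ratio:
            masked.append(MASK)
            targets.append(tok)   # supervised only at masked positions
        else:
            masked.append(tok)
            targets.append(None)  # position ignored in the loss
    return masked, targets

def improvement_reward(sampled_label, best_offline_label):
    """Stage 2 (RL): reward a sampled design by how much its label
    improves on the best design in the offline dataset (clipped at 0)."""
    return max(0.0, sampled_label - best_offline_label)
```

For instance, `format_example("maximize fluorescence", [3, 1, 4], 0.72)` yields a prompt ending in `<prompt_end>` and a response whose design tokens are separated from the label by `<design_end>`; during SFT only the response half passes through `mask_response`.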