Training Diffusion Language Models for Black-Box Optimization

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of offline black-box optimization when design variables exhibit strong bidirectional dependencies, a setting where conventional autoregressive language models struggle to capture complex inter-variable relationships effectively. To overcome this limitation, the study introduces diffusion language models to this task for the first time. The authors propose a unified prompt-response corpus format with explicit delimiter tokens to delineate field boundaries clearly and employ a two-stage post-training framework—combining masked response prediction and reinforcement learning—to align model outputs with high-value designs. This approach effectively bridges the gap between generic pretraining and the target optimization domain. Evaluated under the small-data regime of Design-Bench, the method achieves state-of-the-art performance.
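The summary's "unified prompt-response corpus format with explicit delimiter tokens" can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the delimiter strings (`<prompt>`, `<design>`, `<label>`) and the helper `format_example` are hypothetical names chosen here, since the paper's special tokens are not specified in this page.

```python
# Hypothetical delimiter tokens marking field boundaries in the unified corpus.
# The actual special tokens used by the paper are not given here.
PROMPT_OPEN, PROMPT_CLOSE = "<prompt>", "</prompt>"
DESIGN_OPEN, DESIGN_CLOSE = "<design>", "</design>"
LABEL_OPEN, LABEL_CLOSE = "<label>", "</label>"

def format_example(task_description: str, design: list, label: float) -> str:
    """Serialize one (prompt, design, label) triple with explicit field delimiters,
    so a diffusion LM can be domain-adapted on heterogeneous BBO signals."""
    design_str = " ".join(str(x) for x in design)
    return (
        f"{PROMPT_OPEN} {task_description} {PROMPT_CLOSE} "
        f"{DESIGN_OPEN} {design_str} {DESIGN_CLOSE} "
        f"{LABEL_OPEN} {label:.4f} {LABEL_CLOSE}"
    )

# Toy example: a 4-dimensional design vector with a scalar label.
record = format_example("maximize critical temperature", [0.12, 3, 7, 0.5], 41.2)
```

The explicit open/close markers let the model (and the masking stage of fine-tuning) identify where the prompt ends and the design/label fields begin.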

📝 Abstract
We study offline black-box optimization (BBO), aiming to discover improved designs from an offline dataset of designs and labels, a problem common in robotics, DNA, and materials science with limited labeled samples. While recent work applies autoregressive LLMs to BBO by formatting tasks as natural-language prompts, their left-to-right design generation struggles to capture the strong bidirectional dependencies inherent in design problems. To address this, we propose adapting diffusion LLMs to offline BBO to leverage their bidirectional modeling capabilities. However, a domain gap exists between the natural text pre-training of diffusion LLMs and the heterogeneous signals in BBO (prompts, designs, and labels). To bridge this gap, we construct a unified prompt-response corpus and introduce delimiter tokens to explicitly mark field boundaries for domain adaptation. We further propose a two-stage post-training framework to align the diffusion LLM generation with high-label designs. The first stage performs supervised fine-tuning on the unified dataset via masked-response prediction, and the second stage adopts reinforcement learning with rewards defined by label improvements. Our method achieves state-of-the-art results on Design-Bench small-data settings.
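The two-stage post-training described in the abstract can be sketched in miniature. This is a simplified, assumption-laden illustration: `MASK_ID`, `mask_response`, and `improvement_reward` are names invented here; the choice of masking ratio and of the best offline label as the reward baseline are plausible readings of "masked-response prediction" and "rewards defined by label improvements", not confirmed details.

```python
import random

MASK_ID = -1  # placeholder mask-token id; the real vocabulary id is model-specific

def mask_response(tokens, response_start, response_end, mask_ratio=0.5, seed=0):
    """Stage 1 (SFT) data prep: randomly mask tokens inside the response span only.

    Returns (inputs, targets): inputs have masked positions replaced by MASK_ID;
    targets hold the original ids at masked positions and None elsewhere, so the
    training loss is computed only on masked response tokens, never on the prompt.
    """
    rng = random.Random(seed)
    inputs = list(tokens)
    targets = [None] * len(tokens)
    for i in range(response_start, response_end):
        if rng.random() < mask_ratio:
            targets[i] = inputs[i]
            inputs[i] = MASK_ID
    return inputs, targets

def improvement_reward(predicted_label, best_offline_label):
    """Stage 2 (RL) reward sketch: improvement of a generated design's label over
    the best label in the offline dataset (clipped at zero). The baseline choice
    is an assumption for illustration."""
    return max(0.0, predicted_label - best_offline_label)
```

Masking only the response span mirrors how diffusion LMs denoise masked positions bidirectionally, while the prompt (task description) stays fully visible as conditioning.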
Problem

Research questions and friction points this paper is trying to address.

black-box optimization
offline learning
design optimization
limited labeled data
bidirectional dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion language models
black-box optimization
bidirectional modeling
domain adaptation
reinforcement learning
Zipeng Sun
McGill University, MILA - Quebec AI Institute, Polytechnique Montreal, Canada CIFAR AI Chair, Mohamed bin Zayed University of Artificial Intelligence
Can Chen
MILA - Quebec AI Institute, Polytechnique Montreal, Canada CIFAR AI Chair, Mohamed bin Zayed University of Artificial Intelligence
Ye Yuan
McGill University, MILA - Quebec AI Institute
Generative Modeling · Black Box Optimization · Knowledge-Centric NLP · LLMs
Haolun Wu
Researcher at Mila, McGill, Stanford | Prev. intern at Google, DeepMind, MSR
Knowledge Representation · Information Retrieval · Human-centric AI
Jiayao Gu
McGill University, MILA - Quebec AI Institute, Polytechnique Montreal, Canada CIFAR AI Chair, Mohamed bin Zayed University of Artificial Intelligence
Christopher Pal
MILA - Quebec AI Institute, Polytechnique Montreal, Canada CIFAR AI Chair
Xue Liu
McGill University, MILA - Quebec AI Institute, Mohamed bin Zayed University of Artificial Intelligence