Improved Off-policy Reinforcement Learning in Biological Sequence Design

📅 2024-10-06
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
Biological sequence design faces dual challenges of exponentially large search spaces and prohibitively expensive experimental evaluation. Existing surrogate-based reinforcement learning methods suffer from out-of-distribution (OOD) generalization failure due to insufficient training data. To address this, we propose a robust off-policy GFlowNet framework centered on a δ-conservative search mechanism. This mechanism explicitly maps surrogate model uncertainty to search conservatism via an adaptive δ, integrated with Bernoulli masking perturbations, policy-guided denoising, and uncertainty-driven dynamic δ adjustment—balancing exploration efficiency and OOD robustness. Evaluated on DNA, RNA, protein, and peptide design tasks, our method consistently outperforms state-of-the-art approaches, especially in large-scale settings where it efficiently discovers higher-scoring sequences. Results demonstrate superior generalization capability and practical utility.

📝 Abstract
Designing biological sequences with desired properties is a significant challenge due to the combinatorially vast search space and the high cost of evaluating each candidate sequence. To address these challenges, reinforcement learning (RL) methods, such as GFlowNets, utilize proxy models for rapid reward evaluation and annotated data for policy training. Although these approaches have shown promise in generating diverse and novel sequences, the limited training data relative to the vast search space often leads to misspecification of the proxy for out-of-distribution inputs. We introduce δ-Conservative Search, a novel off-policy search method for training GFlowNets designed to improve robustness against proxy misspecification. The key idea is to incorporate conservativeness, controlled by the parameter δ, to constrain the search to reliable regions. Specifically, we inject noise into high-score offline sequences by randomly masking tokens with a Bernoulli distribution of parameter δ and then denoise the masked tokens using the GFlowNet policy. Additionally, δ is adaptively adjusted based on the uncertainty of the proxy model for each data point, so that the level of conservativeness reflects the proxy's uncertainty. Experimental results demonstrate that our method consistently outperforms existing machine learning methods in discovering high-score sequences across diverse tasks (including DNA, RNA, protein, and peptide design), especially in large-scale scenarios.
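The core mask-and-denoise loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `MASK` token, the toy denoising policy, and the sequence alphabet are assumptions introduced here for clarity.

```python
import random

MASK = "?"  # hypothetical mask symbol, not from the paper


def conservative_search(seq, delta, denoise_policy):
    """Sketch of delta-Conservative Search on one offline sequence:
    mask each token independently with probability delta (Bernoulli
    noise), then let the policy fill in the masked positions."""
    # Step 1: Bernoulli(delta) masking of a high-score offline sequence.
    masked = [MASK if random.random() < delta else tok for tok in seq]
    # Step 2: denoise masked tokens with the (here: toy) GFlowNet policy.
    return [denoise_policy(masked, i) if tok == MASK else tok
            for i, tok in enumerate(masked)]


# Toy policy that always proposes 'A' for a masked position; a real
# GFlowNet policy would sample from its learned token distribution.
random.seed(0)
out = conservative_search(list("ACGTACGT"), delta=0.3,
                          denoise_policy=lambda s, i: "A")
print("".join(out))
```

A small δ keeps the candidate close to the trusted offline sequence (conservative); a large δ regenerates more positions and explores further from the training distribution.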
Problem

Research questions and friction points this paper is trying to address.

Design biological sequences with limited evaluation budgets
Address proxy misspecification in reinforcement learning methods
Enhance robustness in off-policy search for sequence design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Off-policy search with δ-Conservative robustness enhancement
Noise injection via random token masking
Dynamic δ adaptation based on proxy uncertainty
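The last bullet, dynamic δ adaptation, can be sketched with a simple schedule. The linear mapping, the normalization constant `u_max`, and `delta_max` are illustrative assumptions, as is the direction of the mapping (higher proxy uncertainty yields a smaller δ, i.e. a more conservative search that stays closer to known sequences); the paper's exact schedule may differ.

```python
def adaptive_delta(uncertainty, u_max, delta_max=0.5):
    """Map per-datapoint proxy uncertainty to a masking probability.
    Higher uncertainty -> smaller delta -> fewer masked tokens, so the
    search stays nearer to reliable, well-annotated regions.
    (Linear schedule and constants are assumptions for illustration.)"""
    u = min(max(uncertainty / u_max, 0.0), 1.0)  # normalize to [0, 1]
    return delta_max * (1.0 - u)


# Low uncertainty: explore aggressively; high uncertainty: stay conservative.
print(adaptive_delta(0.1, u_max=1.0))  # close to delta_max
print(adaptive_delta(0.9, u_max=1.0))  # close to 0
```

In practice the uncertainty could come from, e.g., the variance of an ensemble of proxy models evaluated on the same sequence.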