Enhancing Reinforcement Learning in 3-Dimensional Hydrophobic-Polar Protein Folding Model with Attention-Based Layers

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the 3D HP protein folding problem using an attention-enhanced reinforcement learning (RL) approach. To overcome the limitations of conventional RL in modeling long-range dependencies, we introduce the Transformer architecture into a deep Q-network (DQN) for 3D lattice-based protein folding, optimizing hydrophobic core formation under self-avoiding walk constraints. Our method innovatively integrates symmetry-breaking constraints, double Q-learning, and prioritized experience replay, along with a customized hydrophobic reward function. Experiments demonstrate that the model successfully recovers multiple known optimal conformations on standard benchmark sequences and achieves near-optimal performance on longer chains. This study constitutes the first empirical validation of an attention-based RL framework for 3D protein structure prediction, establishing a novel data-driven paradigm for computational protein folding.
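The hydrophobic reward described above derives from the standard HP-model energy: every pair of hydrophobic (H) residues that are adjacent on the lattice but not consecutive along the chain contributes one favorable contact. A minimal sketch of that energy computation (an illustration of the standard HP objective, not the authors' code):

```python
from itertools import combinations

def hp_energy(sequence, coords):
    """Energy of a 3D HP lattice conformation: each topological contact
    between two hydrophobic (H) residues that are not chain neighbours
    contributes -1 (more negative = denser hydrophobic core)."""
    assert len(sequence) == len(coords)
    energy = 0
    for i, j in combinations(range(len(sequence)), 2):
        if sequence[i] == "H" and sequence[j] == "H" and j - i > 1:
            # unit-cube lattice adjacency: Manhattan distance of exactly 1
            if sum(abs(a - b) for a, b in zip(coords[i], coords[j])) == 1:
                energy -= 1
    return energy

# Example: a 4-residue chain folded into a square in the z=0 plane;
# residues 0 and 3 are both H and end up as lattice neighbours.
print(hp_energy("HPPH", [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]))  # -1
```

Maximizing the number of such H-H contacts (i.e., minimizing this energy) is the optimization target the RL agent's reward function is built around.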

📝 Abstract
Transformer-based architectures have recently propelled advances in sequence modeling across domains, but their application to the hydrophobic-polar (HP) model for protein folding remains relatively unexplored. In this work, we adapt a Deep Q-Network (DQN) integrated with attention mechanisms (Transformers) to address the 3D HP protein folding problem. Our system formulates folding decisions as a self-avoiding walk in a reinforcement learning environment, and employs a specialized reward function based on favorable hydrophobic interactions. To improve performance, the method incorporates validity checks including symmetry-breaking constraints, dueling and double Q-learning, and prioritized replay to focus learning on critical transitions. Experimental evaluations on standard benchmark sequences demonstrate that our approach recovers several known best solutions for shorter sequences and obtains near-optimal results for longer chains. This study underscores the promise of attention-based reinforcement learning for protein folding and presents a prototype Transformer-based Q-network for 3-dimensional lattice models.
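The self-avoiding walk formulation in the abstract means each action extends the chain by one lattice site, and any move onto an already-occupied site is invalid. A small sketch of such an environment's validity masking (an assumed move ordering for illustration; not the paper's implementation):

```python
# The 6 unit moves available on the cubic lattice (assumed ordering).
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def valid_actions(path):
    """Indices of moves from the chain's current end that keep the
    walk self-avoiding (i.e., do not revisit an occupied site)."""
    occupied = set(path)
    x, y, z = path[-1]
    return [a for a, (dx, dy, dz) in enumerate(MOVES)
            if (x + dx, y + dy, z + dz) not in occupied]

def step(path, action):
    """Apply a move to the growing chain; reject self-intersections."""
    x, y, z = path[-1]
    dx, dy, dz = MOVES[action]
    nxt = (x + dx, y + dy, z + dz)
    if nxt in set(path):
        raise ValueError("move violates the self-avoiding constraint")
    return path + [nxt]

path = [(0, 0, 0), (1, 0, 0)]
print(valid_actions(path))  # [0, 2, 3, 4, 5] — move 1 (back to the origin) is masked
```

Masking invalid actions in this way keeps the Q-network from wasting capacity on transitions that can never occur; symmetry-breaking constraints (e.g., fixing the first moves) further shrink the search space by removing rotationally equivalent folds.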
Problem

Research questions and friction points this paper is trying to address.

Applying Transformer-based DQN to 3D HP protein folding
Optimizing folding via reward functions and validity checks
Achieving near-optimal results for long protein chains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-based DQN for 3D HP protein folding
Specialized reward function for hydrophobic interactions
Transformer-based Q-network with validity checks
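Among the training refinements listed, prioritized experience replay biases sampling toward transitions with large temporal-difference (TD) error. A minimal proportional-prioritization sketch (the buffer class, its `alpha` default, and the eviction policy here are illustrative assumptions, not the authors' implementation):

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized replay buffer (sketch).
    Transitions with larger TD error are sampled more often."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition, td_error):
        # Evict the oldest transition when full (simple FIFO policy).
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idx], idx

buf = PrioritizedReplay(capacity=4)
for t, err in [("a", 1.0), ("b", 0.1), ("c", 2.0)]:
    buf.push(t, err)
batch, _ = buf.sample(2)
```

In a full implementation the sampled indices would also be used to apply importance-sampling weights and to update priorities after each learning step.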
Peizheng Liu
Faculty of Engineering, The University of Tokyo
Hitoshi Iba
University of Tokyo
Artificial intelligence · Evolutionary systems · Complex systems