🤖 AI Summary
This study addresses potential biosafety risks of protein generation models—such as the inadvertent design of sequences with enhanced viral transmissibility or immune evasion—by proposing a knowledge-guided preference optimization framework. Methodologically, it introduces the first integration of a protein safety knowledge graph and graph-pruning strategies into generative modeling, coupling a protein language model with structured prior knowledge encoding and an RL-based safety alignment mechanism to enforce interpretable, safety-aware constraints during sequence generation. Experimental results demonstrate that the framework reduces the probability of generating harmful sequences by 72.3%, while preserving functional fidelity (pLDDT ≥ 85) and sequence diversity. It establishes the first alignment paradigm for AI-driven protein design that simultaneously ensures biosafety, controllability, and practical utility.
📝 Abstract
Protein language models have emerged as powerful tools for sequence generation, offering substantial advantages in functional optimization and denovo design. However, these models also present significant risks of generating harmful protein sequences, such as those that enhance viral transmissibility or evade immune responses. These concerns underscore critical biosafety and ethical challenges. To address these issues, we propose a Knowledge-guided Preference Optimization (KPO) framework that integrates prior knowledge via a Protein Safety Knowledge Graph. This framework utilizes an efficient graph pruning strategy to identify preferred sequences and employs reinforcement learning to minimize the risk of generating harmful proteins. Experimental results demonstrate that KPO effectively reduces the likelihood of producing hazardous sequences while maintaining high functionality, offering a robust safety assurance framework for applying generative models in biotechnology.