Guide your favorite protein sequence generative model

📅 2025-05-07
🤖 AI Summary
Existing protein generation models lack a plug-and-play, principled conditional guidance framework, making it difficult to flexibly incorporate experimental feedback or black-box classifiers (e.g., EC number predictors) for targeted design of sequences with desired properties—such as high thermostability or a specified CATH fold. To address this, we propose ProteinGuide: the first unified statistical conditional sampling framework compatible with four major protein generation paradigms—masked language modeling, autoregressive modeling, diffusion, and flow matching. Grounded in Bayesian inverse problem solving and probabilistic importance reweighting, ProteinGuide enables zero-shot conditional guidance of pretrained models (e.g., ProteinMPNN, ESM3) without fine-tuning. Experiments demonstrate substantial improvements in generated sequence thermostability and precise alignment with target CATH folds, validating its strong cross-architecture and cross-task generalization capability.
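The core idea described above, statistically conditioning a pretrained generator on a classifier via Bayes' rule, can be sketched for the masked / order-agnostic case. The toy below is a hypothetical illustration only, not the paper's code: `toy_prior_logits`, `toy_classifier_loglik`, and the 4-letter alphabet are invented stand-ins for a real pretrained model (e.g. ProteinMPNN or ESM3) and a property predictor.

```python
# Hypothetical sketch: classifier-guided unmasking via Bayesian reweighting.
# At each position we score every candidate token by log-prior + log-likelihood,
# renormalize, and sample, i.e. p(token | rest, y) ∝ p(token | rest) p(y | token, rest).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 4  # toy alphabet standing in for the 20 amino acids

def toy_prior_logits(seq, pos):
    # Stand-in for a pretrained masked model's logits at `pos` (ignores context here).
    return rng.normal(size=VOCAB)

def toy_classifier_loglik(seq):
    # Stand-in for log p(y | x), e.g. a stability predictor; scores filled positions only.
    filled = [t for t in seq if t is not None]
    return -0.5 * sum((t - 1.0) ** 2 for t in filled)

def guided_unmask(seq, pos):
    """Sample one position from the classifier-reweighted conditional."""
    prior = toy_prior_logits(seq, pos)
    scores = np.empty(VOCAB)
    for tok in range(VOCAB):
        cand = list(seq)
        cand[pos] = tok
        scores[tok] = prior[tok] + toy_classifier_loglik(cand)  # Bayes: add log-lik
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(rng.choice(VOCAB, p=probs))

seq = [None] * 6
for pos in rng.permutation(len(seq)):  # order-agnostic unmasking schedule
    seq[pos] = guided_unmask(seq, pos)
print(seq)
```

Because the reweighting only needs the pretrained model's per-position probabilities and a black-box likelihood, no fine-tuning of either model is required, which is what makes the framework plug-and-play.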

📝 Abstract
Generative machine learning models have begun to transform protein engineering, yet no principled framework for conditioning on auxiliary information in a plug-and-play manner exists; one may want to iteratively incorporate experimental feedback, or make use of an existing classifier -- such as for predicting enzyme commission number -- in order to guide the sampling of the generative model to generate sequences with desired properties. Herein, we present ProteinGuide, a rigorous and general framework to achieve just that: through unifying a broad class of protein generative models that includes masked language, (order-agnostic) autoregressive, diffusion and flow-matching models, we provide an approach to statistically condition pre-trained protein generative models. We demonstrate applicability of our approach by guiding each of two commonly used protein generative models, ProteinMPNN and ESM3, to generate amino acid and structure token sequences conditioned on several user-specified properties, namely, enhanced stability and CATH-labeled fold generation.
Problem

Research questions and friction points this paper is trying to address.

Lack of plug-and-play framework for conditioning protein generative models
Need to incorporate experimental feedback or classifiers for guided sampling
Generating protein sequences with desired properties like stability and fold
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for protein generative models
Plug-and-play conditioning on auxiliary information
Guided generation for desired protein properties
Junhao Xiong
Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, Berkeley, 94720, CA, USA
Hunter M Nisonoff
Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, Berkeley, 94720, CA, USA
Ishan Gaur
UC Berkeley
Computer Systems, Machine Learning, Architecture
Jennifer Listgarten
Professor, UC Berkeley EECS and Center for Computational Biology
Machine Learning, Computational Biology, Protein Engineering, Drug Discovery, Statistical Genetics