🤖 AI Summary
Deep generative sequence models can be trained solely on positive data (e.g., high-affinity antibodies), but the lack of attribution methods for such models limits their interpretability and their trustworthy use in design. To address this, we propose GAMA, the first attribution framework tailored to autoregressive generative models, based on Integrated Gradients. GAMA quantifies the contribution of individual sequence positions to generation decisions without requiring negative samples or additional annotations. Its statistical robustness is validated on synthetic data, and it accurately recovers known functional residues in real antibody–antigen binding datasets. By eliminating the reliance on balanced, labeled training data that constrains conventional interpretation methods, GAMA enables interpretable and reproducible attribution for therapeutic biosequence design and validation, strengthening the credibility and practical utility of generative models in precision medicine.
📝 Abstract
Generative machine learning models offer a powerful framework for therapeutic design by efficiently exploring large spaces of biological sequences enriched for desirable properties. Unlike supervised learning methods, which require both positive and negative labeled data, generative models such as LSTMs can be trained solely on positively labeled sequences, for example, high-affinity antibodies. This is particularly advantageous in biological settings where negative data are scarce, unreliable, or biologically ill-defined. However, the lack of attribution methods for generative models has hindered the ability to extract interpretable biological insights from such models. To address this gap, we developed Generative Attribution Metric Analysis (GAMA), an attribution method for autoregressive generative models based on Integrated Gradients. We assessed GAMA using synthetic datasets with known ground truths to characterize its statistical behavior and validate its ability to recover biologically relevant features. We further demonstrated the utility of GAMA by applying it to experimental antibody–antigen binding data. GAMA enables model interpretability and the validation of generative sequence design strategies without the need for negative training data.
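The core computation behind Integrated Gradients, on which GAMA builds, can be sketched in a few lines. The toy below is illustrative only, not the GAMA implementation: the scoring function, its weights, and the all-zeros baseline are all assumptions made for the example. It attributes a smooth score over a one-hot-encoded sequence to each input feature by integrating gradients along the straight-line path from a baseline to the input:

```python
import numpy as np

def model_score(x, w):
    """Toy stand-in for a generative model's sequence score
    (e.g., a log-likelihood): a smooth nonlinear function of
    the one-hot input x. Illustrative, not GAMA's model."""
    return np.tanh(np.sum(w * x))

def model_grad(x, w):
    """Analytic gradient of model_score with respect to x."""
    return w * (1.0 - np.tanh(np.sum(w * x)) ** 2)

def integrated_gradients(x, baseline, w, steps=512):
    """Midpoint Riemann-sum approximation of Integrated Gradients
    along the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    total_grad = np.zeros_like(x)
    for a in alphas:
        total_grad += model_grad(baseline + a * (x - baseline), w)
    return (x - baseline) * total_grad / steps

# Example: a length-10 one-hot feature vector standing in for a sequence.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
x = rng.integers(0, 2, size=10).astype(float)
baseline = np.zeros(10)  # all-zeros baseline, a common (assumed) choice

attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to f(x) - f(baseline).
gap = attr.sum() - (model_score(x, w) - model_score(baseline, w))
```

Per-position attributions for a real model would replace `model_grad` with backpropagated gradients of the autoregressive likelihood; the completeness check above (`gap` near zero) is a standard sanity test for any IG implementation.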