Evaluating protein binding interfaces with PUMBA

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing protein–protein docking scoring functions struggle to simultaneously capture long-range dependencies and local structural features, limiting their ability to discriminate native from non-native complexes. Method: Vision Mamba, a state-of-the-art vision architecture, is applied to protein interface scoring for the first time, replacing the conventional Vision Transformer backbone. The approach converts protein interfaces into 2D structural images and jointly models global context and fine-grained geometric features end to end, integrating deep learning with structural-biology priors. Results: evaluated on multiple mainstream benchmarks, the model significantly outperforms advanced methods such as PIsToN in both scoring accuracy and cross-dataset generalization. It achieves better discrimination between native and decoy complexes, offering more reliable computational support for applications including drug design and vaccine development.
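The summary above describes rendering a protein interface as a 2D structural image and feeding it to a patch-based vision model. As an illustration of that input format only, here is a minimal pure-Python sketch of splitting such a 2D feature grid into a sequence of flattened patches; the grid size, patch size, and feature values are all hypothetical, and the paper's actual featurization is not reproduced here.

```python
# Illustrative sketch, not PUMBA's real pipeline: assume an interface is
# rendered as an H x W grid of per-pixel feature values (e.g., a distance
# or charge map), then tiled into flat patch vectors, the sequence format
# that ViT- and Vision Mamba-style models consume.

def patchify(image, patch):
    """Split a 2D grid (list of lists) into a row-major sequence of flattened patches."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "grid must tile evenly into patches"
    seq = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Flatten one patch in row-major order.
            seq.append([image[i + di][j + dj]
                        for di in range(patch) for dj in range(patch)])
    return seq

# Toy 4x4 "interface image" split into four 2x2 patches.
interface = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
patches = patchify(interface, 2)  # 4 patches of 4 values each
```

Each patch vector would then be linearly embedded and processed as one token of the sequence model.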

📝 Abstract
Protein–protein docking tools help study interactions between proteins and are essential for drug, vaccine, and therapeutic development. However, the accuracy of a docking tool depends on a robust scoring function that can reliably differentiate between native and non-native complexes. PIsToN is a state-of-the-art deep learning-based scoring function that uses Vision Transformers in its architecture. Recently, the Mamba architecture has demonstrated exceptional performance in both natural language processing and computer vision, often outperforming Transformer-based models in those domains. In this study, we introduce PUMBA (Protein-protein interface evaluation with Vision Mamba), which improves PIsToN by replacing its Vision Transformer backbone with Vision Mamba. This change lets us leverage Mamba's efficient long-range sequence modeling over sequences of image patches, significantly improving the model's ability to capture both global and local patterns in protein–protein interface features. Evaluation on several widely used, large-scale public datasets demonstrates that PUMBA consistently outperforms its Transformer-based predecessor, PIsToN.
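The abstract's key mechanism is Mamba's linear-time recurrent scan over the patch sequence, in contrast to a Transformer's quadratic-cost attention. The toy sketch below illustrates only that core idea: a state-space recurrence carried across patch embeddings, pooled into a single interface score. The decay constant, dimensions, and scoring head are invented for illustration; real Mamba layers use input-dependent (selective) parameters and learned projections, which are omitted here.

```python
# Toy sketch of the scan-over-patches idea, not PUMBA's architecture.
# A fixed-decay diagonal recurrence stands in for Mamba's selective
# state-space update; cost is O(L) in the number of patches L, versus
# O(L^2) for full self-attention over the same sequence.

def ssm_scan(patch_seq, decay=0.9):
    """Carry a hidden state across the patch sequence: h_t = decay * h_{t-1} + x_t."""
    h = [0.0] * len(patch_seq[0])
    for x in patch_seq:
        h = [decay * hi + xi for hi, xi in zip(h, x)]
    return h

def interface_score(patch_seq):
    """Collapse the final hidden state into one scalar score by mean pooling."""
    h = ssm_scan(patch_seq)
    return sum(h) / len(h)

# Two toy 2-dim patch embeddings scored in a single linear pass.
print(interface_score([[1.0, 2.0], [3.0, 4.0]]))
```

Because the state summarizes everything seen so far, early patches can still influence the final score, which is the long-range behavior the abstract attributes to Mamba.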
Problem

Research questions and friction points this paper is trying to address.

Evaluating protein–protein binding interfaces more reliably than existing scoring functions
Improving scoring-function accuracy at distinguishing native from non-native complexes
Capturing both global context and local geometric patterns in protein interface features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaced PIsToN's Vision Transformer backbone with Vision Mamba
Leveraged Mamba's efficient long-range modeling over sequences of image patches
Improved capture of both global and local patterns in interface features