🤖 AI Summary
This work addresses a limitation of existing enzyme kinetic parameter prediction methods, which often oversimplify the task as static enzyme-substrate matching and neglect the staged dynamics of substrate recognition and conformational adaptation during catalysis. To overcome this, we propose a staged multimodal conditional modeling framework built around an Enzyme-Reaction Bridging Adapter (ERBA) that integrates cross-modal information while preserving the biochemical priors encoded in protein language models (PLMs). The framework combines Molecular Recognition Cross-Attention (MRCA) to capture substrate specificity, a Geometry-aware Mixture-of-Experts (G-MoE) to model active-site geometry and induced fit, and Enzyme-Substrate Distribution Alignment (ESDA), which enforces distributional consistency in a reproducing kernel Hilbert space to maintain semantic fidelity. The method consistently outperforms unimodal and shallow-fusion baselines across multiple kinetic endpoints and PLM backbones, with particularly strong out-of-distribution generalization.
📝 Abstract
Enzyme kinetic parameters quantify how efficiently an enzyme catalyzes a specific substrate under defined biochemical conditions. Canonical parameters such as the turnover number ($k_\text{cat}$), Michaelis constant ($K_\text{m}$), and inhibition constant ($K_\text{i}$) depend jointly on the enzyme sequence, the substrate chemistry, and the conformational adaptation of the active site during binding. Many learning pipelines simplify this process to a static compatibility problem between the enzyme and substrate, fusing their representations through shallow operations and regressing a single value. Such formulations overlook the staged nature of catalysis, which involves both substrate recognition and conformational adaptation. We therefore reformulate kinetic prediction as a staged multimodal conditional modeling problem and introduce the Enzyme-Reaction Bridging Adapter (ERBA), which injects cross-modal information into Protein Language Models (PLMs) via fine-tuning while preserving their biochemical priors. ERBA performs conditioning in two stages: Molecular Recognition Cross-Attention (MRCA) first injects substrate information into the enzyme representation to capture specificity; Geometry-aware Mixture-of-Experts (G-MoE) then integrates active-site structure and routes samples to pocket-specialized experts to reflect induced fit. To maintain semantic fidelity, Enzyme-Substrate Distribution Alignment (ESDA) enforces distributional consistency within the PLM manifold in a reproducing kernel Hilbert space. In experiments across three kinetic endpoints and multiple PLM backbones, ERBA delivers consistent gains and stronger out-of-distribution performance compared with sequence-only and shallow-fusion baselines, offering a biologically grounded route to scalable kinetic prediction and a foundation for incorporating cofactors, mutations, and time-resolved structural cues.
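Since the abstract outlines a concrete two-stage conditioning mechanism, a compact sketch may help make the data flow tangible. The module names (MRCA, G-MoE, ESDA) follow the abstract, but every internal detail below is an assumption made for illustration: the residual cross-attention layout, the top-k routing over pocket geometry features, the expert count, and the use of an RBF-kernel maximum mean discrepancy as the RKHS alignment term. This is a minimal sketch, not the authors' implementation.

```python
# Minimal PyTorch sketch of the staged conditioning described in the abstract.
# All dimensions, routing details, and kernel choices are illustrative assumptions.
import torch
import torch.nn as nn


class MRCA(nn.Module):
    """Molecular Recognition Cross-Attention (sketch): enzyme residue tokens
    attend over substrate tokens to inject substrate context (stage 1)."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, enzyme_tokens, substrate_tokens):
        # Query: enzyme residues; key/value: substrate tokens.
        ctx, _ = self.attn(enzyme_tokens, substrate_tokens, substrate_tokens)
        return self.norm(enzyme_tokens + ctx)  # residual keeps the PLM priors


class GMoE(nn.Module):
    """Geometry-aware Mixture-of-Experts (sketch): a pocket-geometry feature
    routes each sample to a few specialized expert MLPs (stage 2)."""

    def __init__(self, d_model: int, d_geom: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_geom, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                          nn.Linear(d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, h, geom_feat):
        # h: (B, d_model) pooled enzyme-substrate state; geom_feat: (B, d_geom).
        logits = self.router(geom_feat)                 # (B, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # per-sample routing
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(h)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(h[mask])
        return out


def esda_mmd_loss(adapted, original, sigma: float = 1.0):
    """ESDA-style alignment (sketch): penalize the RBF-kernel maximum mean
    discrepancy between adapted and original PLM embedding distributions,
    one standard RKHS distance between distributions."""
    def rbf(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    k_xx = rbf(adapted, adapted).mean()
    k_yy = rbf(original, original).mean()
    k_xy = rbf(adapted, original).mean()
    return k_xx + k_yy - 2.0 * k_xy
```

In this sketch, the G-MoE output would feed a regression head for $k_\text{cat}$, $K_\text{m}$, or $K_\text{i}$, with the MMD term added to the training loss; the actual ERBA wiring, expert specialization, and kernel choice are specified in the paper, not here.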