Predictive Feature Caching for Training-free Acceleration of Molecular Geometry Generation

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In molecular geometry generation, flow-matching models require hundreds of network evaluations during inference, severely limiting large-scale sampling efficiency. To address this, we propose a training-free intermediate latent-state caching strategy—the first such approach enabling model-agnostic feature reuse on SE(3)-equivariant backbone networks. By predicting latent states between successive steps of the ODE solver, our method skips redundant forward passes without modifying the model or training procedure. Orthogonal to existing acceleration techniques, it is plug-and-play, compatible with any pre-trained flow-matching model, and incurs zero additional training cost. On the GEOM-Drugs dataset, it halves wall-clock inference time at matched sample quality and reaches speedups of up to 3× over the base model with minimal quality degradation; combined with other lossless optimizations, the total speedup reaches up to 7×. Our core contribution is the first training-free caching mechanism tailored for SE(3)-equivariant flow matching, significantly alleviating the inference bottleneck in 3D molecular geometry generation.

📝 Abstract
Flow matching models generate high-fidelity molecular geometries but incur significant computational costs during inference, requiring hundreds of network evaluations. This inference overhead becomes the primary bottleneck when such models are employed in practice to sample large numbers of molecular candidates. This work discusses a training-free caching strategy that accelerates molecular geometry generation by predicting intermediate hidden states across solver steps. The proposed method operates directly on the SE(3)-equivariant backbone, is compatible with pretrained models, and is orthogonal to existing training-based accelerations and system-level optimizations. Experiments on the GEOM-Drugs dataset demonstrate that caching achieves a twofold reduction in wall-clock inference time at matched sample quality and a speedup of up to 3x compared to the base model with minimal sample quality degradation. Because these gains compound with other optimizations, applying caching alongside other general, lossless optimizations yields as much as a 7x speedup.
Problem

Research questions and friction points this paper is trying to address.

Accelerating molecular geometry generation by reducing computational costs
Predicting intermediate states to minimize network evaluations during inference
Maintaining sample quality while achieving faster inference speeds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free caching strategy accelerates molecular geometry generation
Predicts intermediate hidden states across solver steps
Operates directly on SE(3)-equivariant backbone of models
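The core idea above — skipping network evaluations on some solver steps by predicting the state from recent evaluations — can be illustrated with a minimal sketch. This is a hypothetical Euler-solver loop, not the paper's implementation: the function name `euler_sample_with_caching`, the `cache_interval` parameter, and the linear-extrapolation rule for skipped steps are all assumptions standing in for the paper's latent-state prediction on the SE(3)-equivariant backbone.

```python
import math

def euler_sample_with_caching(model, x0, n_steps=100, cache_interval=2):
    """Euler ODE sampling with a simple caching heuristic (illustrative only).

    On steps where the cache is used, the expensive network call is skipped
    and the velocity is linearly extrapolated from the two most recent
    evaluations -- a crude stand-in for the paper's latent-state prediction.
    """
    x = x0
    dt = 1.0 / n_steps
    v_prev, v_curr = None, None
    for i in range(n_steps):
        t = i * dt
        if i % cache_interval == 0 or v_prev is None:
            v = model(x, t)            # full (expensive) network evaluation
            v_prev, v_curr = v_curr, v
        else:
            v = 2.0 * v_curr - v_prev  # predict via linear extrapolation
        x = x + dt * v                 # Euler update with true or predicted velocity
    return x
```

With `cache_interval=2`, roughly half the forward passes are skipped; larger intervals trade more speed for more extrapolation error, mirroring the speed/quality trade-off reported in the abstract.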