Quantized SO(3)-Equivariant Graph Neural Networks for Efficient Molecular Property Prediction

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying SO(3)-equivariant 3D graph neural networks on edge devices is hindered by their high computational overhead. This work proposes a low-bit quantization method that preserves equivariance while significantly compressing the model and accelerating inference. Three components mitigate quantization-induced equivariance errors: magnitude-direction decoupled quantization, branch-separated quantization-aware training, and an attention normalization mechanism. Applied to an SO(3)-equivariant Transformer and evaluated with a dedicated equivariance-error metric (LEE), the method produces 8-bit models whose accuracy matches that of full-precision counterparts on the QM9 and rMD17 datasets. It also delivers a 2.37–2.73× speedup in inference latency and a 4× reduction in model size, demonstrating strong practicality for resource-constrained deployment.
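The paper gives no implementation details here, but the core idea of magnitude-direction decoupled quantization can be sketched: split each equivariant 3D vector feature into its norm (rotation-invariant) and unit direction (rotation-equivariant), and quantize the two parts separately. The function names and the symmetric uniform quantizer below are assumptions, not the authors' code.

```python
import numpy as np

def quantize_uniform(x, bits=8):
    """Symmetric uniform fake-quantization to the given bit width (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8 bits
    scale = max(float(np.max(np.abs(x))), 1e-12) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                                # dequantized values

def quantize_vector_features(v, bits=8):
    """Quantize (N, 3) equivariant vector features with magnitude-direction decoupling.

    The norm and the unit direction are quantized separately, so quantization
    noise on the rotation-equivariant part is bounded independently of feature scale.
    """
    norms = np.linalg.norm(v, axis=-1, keepdims=True)
    dirs = np.where(norms > 0, v / np.maximum(norms, 1e-12), 0.0)
    return quantize_uniform(norms, bits) * quantize_uniform(dirs, bits)
```

Because directions live on the unit sphere, their quantization step is fixed (about 1/127 per component at 8 bits), while the norm quantizer absorbs the dynamic range; this is one plausible reason decoupling limits equivariance error.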

📝 Abstract
Deploying 3D graph neural networks (GNNs) that are equivariant to 3D rotations (the group SO(3)) on edge devices is challenging due to their high computational cost. This paper addresses the problem by compressing and accelerating an SO(3)-equivariant GNN using low-bit quantization techniques. Specifically, we introduce three innovations for quantized equivariant transformers: (1) a magnitude-direction decoupled quantization scheme that separately quantizes the norm and orientation of equivariant (vector) features, (2) a branch-separated quantization-aware training strategy that treats invariant and equivariant feature channels differently in an attention-based SO(3)-GNN, and (3) a robustness-enhancing attention normalization mechanism that stabilizes low-precision attention computations. Experiments on the QM9 and rMD17 molecular benchmarks demonstrate that our 8-bit models achieve accuracy on energy and force predictions comparable to full-precision baselines with markedly improved efficiency. We also conduct ablation studies to quantify each component's contribution to maintaining accuracy and equivariance under quantization, using the local error of equivariance (LEE) metric. The proposed techniques enable the deployment of symmetry-aware GNNs in practical chemistry applications with 2.37–2.73× faster inference and 4× smaller model size, without sacrificing accuracy or physical symmetry.
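The abstract does not define the LEE metric beyond its name, but a generic equivariance-error check of the kind it implies compares "rotate, then apply the model" against "apply the model, then rotate". The following sketch, with assumed function names, measures that discrepancy for any vector-valued model:

```python
import numpy as np

def random_rotation(rng):
    """Sample a uniformly random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))        # fix column signs to make the factorization unique
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1               # ensure a proper rotation (det = +1)
    return q

def equivariance_error(f, x, n_trials=10, seed=0):
    """Mean relative discrepancy between f(x @ R.T) and f(x) @ R.T over random R.

    f maps (N, 3) vector inputs to (N, 3) vector outputs. A perfectly
    SO(3)-equivariant f gives zero; quantization noise makes this nonzero.
    """
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        R = random_rotation(rng)
        out_rot = f(x @ R.T)        # rotate the input, then apply the model
        rot_out = f(x) @ R.T        # apply the model, then rotate the output
        errs.append(np.linalg.norm(out_rot - rot_out)
                    / max(float(np.linalg.norm(rot_out)), 1e-12))
    return float(np.mean(errs))
```

A scalar multiple of the identity (e.g. `lambda x: 2.0 * x`) scores essentially zero, while an elementwise nonlinearity such as `lambda x: x ** 2` does not, which is the kind of gap an LEE-style metric is meant to expose.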
Problem

Research questions and friction points this paper is trying to address.

SO(3)-equivariant graph neural networks, quantization, molecular property prediction, edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantization, SO(3)-equivariance, graph neural networks, molecular property prediction, quantization-aware training