Preserving Continuous Symmetry in Discrete Spaces: Geometric-Aware Quantization for SO(3)-Equivariant GNNs

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation of SO(3) equivariance in low-bit quantized graph neural networks, where disrupted continuous rotational symmetry compromises physical consistency and violates conservation laws. The authors propose a geometry-aware quantization framework that preserves SO(3) equivariance during compression through magnitude-direction decoupled quantization (MDDQ), differentiated quantization schedules for scalar and vector features, and low-bit gradient stabilization. Combined with symmetry-aware training and robust attention normalization, the method achieves near-FP32 accuracy on the rMD17 benchmark with a W4A8 model (9.31 meV MAE), reduces local equivariance error by over 30×, accelerates inference by 2.39×, and cuts memory usage by 4× relative to the full-precision counterpart.
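The core idea of MDDQ — quantize the rotation-invariant length and the rotation-equivariant direction of each vector feature separately — can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the uniform quantizer, the per-tensor ranges, and the chosen bit-widths (`mag_bits`, `dir_bits`) are all assumptions for the sketch.

```python
import numpy as np

def quantize_uniform(x, bits, xmin, xmax):
    # Plain uniform quantization to 2**bits levels over [xmin, xmax].
    levels = 2 ** bits - 1
    scale = (xmax - xmin) / levels
    q = np.clip(np.round((x - xmin) / scale), 0, levels)
    return q * scale + xmin

def mddq(v, mag_bits=8, dir_bits=4):
    """Magnitude-direction decoupled quantization of 3-vectors (illustrative)."""
    m = np.linalg.norm(v, axis=-1, keepdims=True)            # invariant lengths
    u = np.where(m > 0, v / np.maximum(m, 1e-12), 0.0)       # equivariant unit directions
    m_q = quantize_uniform(m, mag_bits, 0.0, float(m.max())) # quantize the invariant part
    u_q = quantize_uniform(u, dir_bits, -1.0, 1.0)           # quantize direction coordinates
    n = np.linalg.norm(u_q, axis=-1, keepdims=True)
    u_q = np.where(n > 0, u_q / np.maximum(n, 1e-12), 0.0)   # project back onto the unit sphere
    return m_q * u_q
```

Because the length channel is rotation-invariant and the direction is renormalized onto the sphere, the norm of the quantized feature is exactly preserved under any input rotation; only the (bounded) angular rounding of the direction remains as equivariance error.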

📝 Abstract
Equivariant Graph Neural Networks (GNNs) are essential for physically consistent molecular simulations but suffer from high computational cost and memory bottlenecks, especially with high-order representations. While low-bit quantization offers a remedy, applying it naively to rotation-sensitive features destroys the SO(3)-equivariant structure, leading to significant errors and violations of conservation laws. To address this, we propose a Geometric-Aware Quantization (GAQ) framework that compresses and accelerates equivariant models while rigorously preserving continuous symmetry in discrete spaces. Our approach introduces three key contributions: (1) a Magnitude-Direction Decoupled Quantization (MDDQ) scheme that separates invariant lengths from equivariant orientations to maintain geometric fidelity; (2) a symmetry-aware training strategy that applies distinct quantization schedules to scalar and vector features; and (3) a robust attention normalization mechanism that stabilizes gradients in low-bit regimes. Experiments on the rMD17 benchmark demonstrate that our W4A8 models match the accuracy of FP32 baselines (9.31 meV vs. 23.20 meV) while reducing Local Equivariance Error (LEE) by over 30× compared to naive quantization. On consumer hardware, GAQ achieves a 2.39× inference speedup and 4× memory reduction, enabling stable, energy-conserving molecular dynamics simulations over nanosecond timescales.
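The equivariance violation the abstract quantifies can be probed generically by comparing "rotate then apply" against "apply then rotate" over random rotations. The sketch below is a hedged stand-in for such a metric — the paper's exact LEE definition may differ — with `random_rotation` and `equivariance_error` being names introduced here for illustration.

```python
import numpy as np

def random_rotation(rng):
    # Random rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))   # canonicalize column signs
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]        # force det = +1: a proper rotation, not a reflection
    return q

def equivariance_error(f, x, n_trials=64, seed=0):
    """Mean ||f(x R^T) - f(x) R^T|| over random rotations R (LEE-style probe)."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        R = random_rotation(rng)
        errs.append(np.linalg.norm(f(x @ R.T) - f(x) @ R.T))
    return float(np.mean(errs))
```

A perfectly equivariant map (e.g. the identity) scores zero under this probe, whereas naive per-component rounding does not commute with rotation and scores strictly positive — exactly the failure mode that magnitude-direction decoupling is designed to suppress.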
Problem

Research questions and friction points this paper is trying to address.

SO(3)-equivariance
quantization
symmetry preservation
geometric fidelity
molecular simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric-Aware Quantization
SO(3)-Equivariance
Magnitude-Direction Decoupled Quantization
Low-bit Quantization
Equivariant GNNs