AI Summary
To balance SE(3)-equivariance preservation against parameter efficiency when fine-tuning equivariant graph neural networks (EGNNs) on novel chemical domains, this paper proposes the Magnitude-Modulated Equivariant Adapter (MMEA), a lightweight, scalar-gated equivariant fine-tuning method. For spherical-harmonic-based EGNNs, MMEA introduces learnable scalar gates per tensor order and per multiplicity that modulate only feature magnitudes, strictly preserving SE(3) equivariance and avoiding perturbation of the pretrained feature distributions. As the first fully equivariant, magnitude-explicit, parameter-efficient fine-tuning paradigm, MMEA achieves state-of-the-art energy and force prediction on the QM9 and MD17 benchmarks, training over 40% fewer parameters than ELoRA while delivering superior generalization and training efficiency.
Abstract
Pretrained equivariant graph neural networks based on spherical harmonics offer efficient and accurate alternatives to computationally expensive ab initio methods, yet adapting them to new tasks and chemical environments still requires fine-tuning. Conventional parameter-efficient fine-tuning (PEFT) techniques, such as Adapters and LoRA, typically break symmetry, making them incompatible with such equivariant architectures. The recently proposed ELoRA is the first equivariant PEFT method and improves parameter efficiency and performance on many benchmarks. However, the relatively high degrees of freedom it retains within each tensor order can still perturb pretrained feature distributions and ultimately degrade performance. To address this, we present the Magnitude-Modulated Equivariant Adapter (MMEA), a novel equivariant fine-tuning method that employs lightweight scalar gating to modulate feature magnitudes on a per-order and per-multiplicity basis. We demonstrate that MMEA preserves strict equivariance and, across multiple benchmarks, consistently improves energy and force predictions to state-of-the-art levels while training fewer parameters than competing approaches. These results suggest that, in many practical scenarios, modulating channel magnitudes is sufficient to adapt equivariant models to new chemical environments without breaking symmetry, pointing toward a new paradigm for equivariant PEFT design.
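To make the core idea concrete, the sketch below shows per-order, per-multiplicity scalar gating on spherical-harmonic (irrep) features and checks, for order l = 1, that gating commutes with rotation. This is a minimal numpy illustration under our own assumptions about the feature layout (a dict mapping order l to an array of shape (channels, 2l+1)), not the paper's actual implementation; all names are hypothetical.

```python
import numpy as np

def scalar_gate(features, gates):
    """Apply one learnable scalar gate per (order, channel) pair.

    features: dict {l: array of shape (channels, 2l+1)} -- irrep features
    gates:    dict {l: array of shape (channels,)}      -- scalar gates

    Each channel's (2l+1)-vector is scaled by a single scalar. A scalar
    commutes with any Wigner-D rotation acting on that vector, so the
    gate changes only magnitudes and preserves SE(3) equivariance.
    """
    return {l: gates[l][:, None] * x for l, x in features.items()}

# Equivariance check at l = 1, where the Wigner-D matrix is an ordinary
# 3x3 rotation acting on each channel's 3-vector.
rng = np.random.default_rng(0)
feats = {0: rng.normal(size=(4, 1)), 1: rng.normal(size=(4, 3))}
gates = {0: rng.normal(size=4), 1: rng.normal(size=4)}

theta = 0.7  # arbitrary rotation angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Rotate first, then gate ...
rotated_then_gated = scalar_gate({0: feats[0], 1: feats[1] @ R.T}, gates)
# ... versus gate first, then rotate.
g = scalar_gate(feats, gates)
gated_then_rotated = {0: g[0], 1: g[1] @ R.T}

# The two orders agree: gating only rescales magnitudes.
assert np.allclose(rotated_then_gated[1], gated_then_rotated[1])
```

Because only one scalar per (order, channel) pair is trained, the adapter's parameter count is independent of the (2l+1) component dimension, which is the source of the parameter savings the summary describes.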