Magnitude-Modulated Equivariant Adapter for Parameter-Efficient Fine-Tuning of Equivariant Graph Neural Networks

๐Ÿ“… 2025-11-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the challenge of balancing SE(3)-equivariance preservation against parameter efficiency when fine-tuning equivariant graph neural networks (EGNNs) on novel chemical domains, this paper proposes the Magnitude-Modulated Equivariant Adapter (MMEA), a lightweight, scalar-gated equivariant fine-tuning method. For spherical-harmonic-based EGNNs, MMEA introduces learnable scalar gates per tensor order and multiplicity, modulating only feature magnitudes; this strictly preserves SE(3) equivariance and avoids perturbing the pretrained feature distributions. As the first fully equivariant, magnitude-explicit, parameter-efficient fine-tuning method, MMEA achieves state-of-the-art energy and force prediction on the QM9 and MD17 benchmarks, reducing trainable parameters by over 40% compared to ELoRA while delivering better generalization and training efficiency.
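The gating mechanism described above can be illustrated with a minimal sketch. The following is not the paper's implementation; it assumes a toy feature layout where each tensor order l holds a `(multiplicity, 2l+1)` array, and shows that multiplying each channel by a single learnable scalar rescales magnitudes while commuting with rotations (checked for l=1, whose real irrep is the ordinary 3x3 rotation matrix):

```python
import numpy as np

np.random.seed(0)

def scalar_gate(features, gates):
    """Apply per-order, per-multiplicity scalar gates to spherical-harmonic features.

    features: dict mapping tensor order l -> array of shape (mult_l, 2l + 1)
    gates:    dict mapping tensor order l -> array of shape (mult_l,)

    Each gate scales the magnitude of one irrep channel; the direction of the
    irrep vector is untouched, so the operation commutes with rotations.
    """
    return {l: gates[l][:, None] * x for l, x in features.items()}

# toy features: 2 scalar (l=0) channels and 3 vector (l=1) channels
feats = {0: np.random.randn(2, 1), 1: np.random.randn(3, 3)}
gates = {0: np.array([1.1, 0.9]), 1: np.array([0.5, 1.0, 2.0])}
gated = scalar_gate(feats, gates)

# equivariance check for l=1: rotating then gating equals gating then rotating
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rotated = {0: feats[0], 1: feats[1] @ R.T}
assert np.allclose(scalar_gate(rotated, gates)[1], gated[1] @ R.T)
```

Because each gate broadcasts across an entire irrep vector, the commutation with rotations holds exactly, not just approximately, for every tensor order.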

๐Ÿ“ Abstract
Pretrained equivariant graph neural networks based on spherical harmonics offer efficient and accurate alternatives to computationally expensive ab-initio methods, yet adapting them to new tasks and chemical environments still requires fine-tuning. Conventional parameter-efficient fine-tuning (PEFT) techniques, such as Adapters and LoRA, typically break symmetry, making them incompatible with these equivariant architectures. The recently proposed ELoRA, the first equivariant PEFT method, achieves improved parameter efficiency and performance on many benchmarks. However, the relatively high degrees of freedom it retains within each tensor order can still perturb pretrained feature distributions and ultimately degrade performance. To address this, we present the Magnitude-Modulated Equivariant Adapter (MMEA), a novel equivariant fine-tuning method that employs lightweight scalar gating to modulate feature magnitudes on a per-order and per-multiplicity basis. We demonstrate that MMEA preserves strict equivariance and, across multiple benchmarks, consistently improves energy and force predictions to state-of-the-art levels while training fewer parameters than competing approaches. These results suggest that, in many practical scenarios, modulating channel magnitudes is sufficient to adapt equivariant models to new chemical environments without breaking symmetry, pointing toward a new paradigm for equivariant PEFT design.
Problem

Research questions and friction points this paper is trying to address.

Conventional PEFT techniques (Adapters, LoRA) break symmetry when applied to equivariant graph neural networks
The existing equivariant method ELoRA retains enough freedom per tensor order to perturb pretrained feature distributions
Whether modulating feature magnitudes alone can adapt equivariant models to new chemical environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modulates feature magnitudes with scalar gating
Preserves strict equivariance in fine-tuning
Trains fewer parameters than competing approaches
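The parameter-efficiency claim can be made concrete with a back-of-the-envelope comparison. The channel multiplicities below are hypothetical, and the low-rank baseline is an illustrative LoRA-style parameterization over the multiplicity dimension, not the paper's exact ELoRA scheme; the >40% reduction figure itself comes from the paper's experiments:

```python
# hypothetical channel layout: multiplicity per tensor order l = 0, 1, 2
mults = {0: 128, 1: 64, 2: 32}

def gate_params(mults):
    # scalar gating: one learnable scalar per (order, multiplicity) channel
    return sum(mults.values())

def lora_like_params(mults, rank=8):
    # illustrative low-rank update per order: two rank-r factors of shape
    # (mult_l, r) and (r, mult_l) over the multiplicity dimension
    return sum(2 * rank * m for m in mults.values())

print(gate_params(mults))       # 224
print(lora_like_params(mults))  # 3584
```

Even at a modest rank, the low-rank adapter carries an order of magnitude more trainable parameters than one scalar gate per channel, which is why magnitude-only modulation is so cheap.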
๐Ÿ”Ž Similar Papers
No similar papers found.