Parameter-Efficient Domain Adaptation of Physics-Informed Self-Attention based GNNs for AC Power Flow Prediction

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inaccurate AC power flow prediction under the domain shift that arises when models trained on medium-voltage grids are transferred to high-voltage grids. To this end, the authors propose a parameter-efficient domain adaptation method that injects Low-Rank Adaptation (LoRA) into the self-attention projections of a graph neural network, combines it with a physics-informed loss enforcing consistency with Kirchhoff's laws, and selectively fine-tunes only the prediction head. Evaluated across multiple grid topologies, the method achieves near-full-fine-tuning accuracy, with a target-domain RMSE gap of only 2.6×10⁻⁴, while training just 14.54% of the parameters. It maintains comparable physical residuals, incurs only a 4.7-percentage-point drop in source-domain performance, and substantially reduces computational overhead while mitigating catastrophic forgetting.
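The adaptation recipe the summary describes can be sketched in a few lines: wrap each frozen attention projection with a trainable low-rank update and unfreeze only the prediction head. The sketch below is a generic LoRA illustration in PyTorch, not the paper's implementation; the attribute names `attention_blocks`, `q_proj`/`k_proj`/`v_proj`, and `prediction_head` are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear projection plus a trainable low-rank update:
    y = W x + (alpha / r) * B(A x). Only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero-init B: starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

def adapt(model, r: int = 8):
    """Hypothetical adaptation step: LoRA on attention projections,
    selective unfreezing of the prediction head (attribute names assumed)."""
    for blk in model.attention_blocks:
        blk.q_proj = LoRALinear(blk.q_proj, r)
        blk.k_proj = LoRALinear(blk.k_proj, r)
        blk.v_proj = LoRALinear(blk.v_proj, r)
    for p in model.prediction_head.parameters():
        p.requires_grad = True
    return model
```

Because B is zero-initialized, the adapted model reproduces the pretrained model exactly at step zero; only the small A/B factors and the head contribute trainable parameters, which is where the reported 14.54% figure comes from in spirit.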

📝 Abstract
Accurate AC-PF prediction under domain shift is critical when models trained on medium-voltage (MV) grids are deployed on high-voltage (HV) networks. Existing physics-informed graph neural solvers typically rely on full fine-tuning for cross-regime transfer, incurring high retraining cost and offering limited control over the stability-plasticity trade-off between target-domain adaptation and source-domain retention. We study parameter-efficient domain adaptation for physics-informed self-attention-based GNNs, encouraging Kirchhoff-consistent behavior via a physics-based loss while restricting adaptation to low-rank updates. Specifically, we apply LoRA to the attention projections with selective unfreezing of the prediction head to regulate adaptation capacity. This design yields a controllable efficiency-accuracy trade-off for physics-constrained inverse estimation under voltage-regime shift. Across multiple grid topologies, the proposed LoRA+PHead adaptation recovers near-full-fine-tuning accuracy with a target-domain RMSE gap of $2.6\times10^{-4}$ while reducing the number of trainable parameters by 85.46%. The physics-based residual remains comparable to full fine-tuning; however, relative to Full FT, LoRA+PHead reduces MV source retention by 4.7 percentage points (17.9% vs. 22.6%) under domain shift, while still enabling parameter-efficient and physically consistent AC-PF estimation.
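The "Kirchhoff-consistent behavior via a physics-based loss" mentioned in the abstract is typically realized as a power-balance residual: given a bus admittance matrix Y and predicted complex bus voltages V, the implied injections S_i = V_i · conj(Σ_j Y_ij V_j) should match the specified injections. The NumPy sketch below illustrates this idea under assumed conventions; the admittance values and the mean-squared loss form are illustrative, not the paper's exact formulation.

```python
import numpy as np

def power_balance_residual(v, y_bus, s_spec):
    """Mean squared mismatch between the injections implied by predicted
    voltages and the specified injections (Kirchhoff/power-balance loss).
    v: complex bus voltages; y_bus: bus admittance matrix;
    s_spec: specified complex power injections."""
    s_pred = v * np.conj(y_bus @ v)      # S_i = V_i * conj(sum_j Y_ij V_j)
    mismatch = s_pred - s_spec
    return np.mean(np.abs(mismatch) ** 2)

# Toy 2-bus example with an illustrative admittance matrix.
y = np.array([[ 1 - 5j, -1 + 5j],
              [-1 + 5j,  1 - 5j]])
v = np.array([1.0 + 0.0j, 0.98 - 0.02j])
s = v * np.conj(y @ v)                   # injections consistent by construction
assert power_balance_residual(v, y, s) < 1e-12
```

A term of this form added to the regression loss penalizes voltage predictions that violate the network equations, which is what keeps the physics residual comparable between LoRA+PHead and full fine-tuning.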
Problem

Research questions and friction points this paper is trying to address.

domain adaptation
AC power flow prediction
physics-informed GNN
parameter efficiency
voltage-regime shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-Efficient Adaptation
Physics-Informed GNN
LoRA
AC Power Flow Prediction
Domain Shift
Redwanul Karim
Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Changhun Kim
Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Timon Conrad
Institute of Electrical Energy Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Nora Gourmelon
Friedrich-Alexander-Universität
Deep Learning, Climate Change, Sustainability, Machine Learning
Julian Oelhaf
Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
David Riebesel
Institute of Electrical Energy Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Tomás Arias-Vergara
Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Andreas Maier
Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Johann Jäger
Institute of Electrical Energy Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Siming Bayer
Researcher, Pattern Recognition Lab, Friedrich-Alexander University
Medical Image Processing, Computer Guided Intervention, Machine Learning