🤖 AI Summary
Physics-informed neural networks (PINNs) suffer from poor generalization, requiring full retraining when boundary conditions, material parameters, or geometric configurations change. Method: This work systematically investigates transfer learning for both strong-form and energy-based PINNs, introducing efficient fine-tuning techniques—particularly Low-Rank Adaptation (LoRA)—to enable parameter transfer across diverse physical scenarios. We propose a unified adaptation framework integrating full-parameter fine-tuning, lightweight fine-tuning, and LoRA, designed to support both mainstream PINN formulations. Contribution/Results: Experiments demonstrate that our approach accelerates convergence by an average factor of 2.1× and improves solution accuracy, reducing relative error by 5.3%. By enabling cross-scenario knowledge reuse, the method overcomes the task-specific limitation of conventional PINNs and establishes a new paradigm for building reusable, adaptive, physics-driven AI solvers.
📝 Abstract
AI for PDEs has garnered significant attention, particularly Physics-Informed Neural Networks (PINNs). However, PINNs are typically limited to solving specific problems, and any changes in problem conditions necessitate retraining. Therefore, we explore the generalization capability of transfer learning in the strong and energy form of PINNs across different boundary conditions, materials, and geometries. The transfer learning methods we employ include full finetuning, lightweight finetuning, and Low-Rank Adaptation (LoRA). The results demonstrate that full finetuning and LoRA can significantly improve convergence speed while providing a slight enhancement in accuracy.