🤖 AI Summary
To address the poor cross-source generalization of data-driven models in near-field acoustic holography (NAH), this paper proposes a physics-informed transfer learning framework. First, a complex-valued convolutional neural network (CV-CNN) is pre-trained on a rectangular plate dataset; then, single-sample fine-tuning is performed under physical constraints derived from the Kirchhoff–Helmholtz integral equation. This enables rapid adaptation to unseen source types without requiring labeled data from the target source. In experiments on a violin top plate, the fine-tuned model is more accurate than the pre-trained CV-CNN, performs comparably to the compressive equivalent source method (C-ESM) overall, and outperforms both on successfully reconstructed modes, reducing key modal error by up to 32%. The core innovation lies in embedding acoustic physical priors deeply into the transfer learning pipeline, synergizing data-driven representation capability with acoustic interpretability and substantially enhancing the cross-domain robustness and practical applicability of NAH models.
📝 Abstract
We propose a transfer learning framework for sound source reconstruction in near-field acoustic holography (NAH), which adapts a well-trained data-driven model from one type of sound source to another through a physics-informed procedure. The framework comprises two stages: (1) supervised pre-training of a complex-valued convolutional neural network (CV-CNN) on a large dataset, and (2) purely physics-informed fine-tuning on a single data sample based on the Kirchhoff–Helmholtz integral. In the spirit of transfer learning, this physics-informed adaptation enables generalization across different datasets. The approach is validated by transferring a model pre-trained on a rectangular plate dataset to a violin top plate dataset, where it improves reconstruction accuracy over the pre-trained model and performs comparably to the compressive equivalent source method (C-ESM). Moreover, for successfully reconstructed modes, the fine-tuned model outperforms both the pre-trained model and C-ESM.
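The two-stage recipe above can be sketched numerically. The toy below is an illustrative assumption, not the paper's implementation: a linear complex least-squares map `W` stands in for the pre-trained CV-CNN, a random complex matrix `G` stands in for the discretized Kirchhoff–Helmholtz propagator, and fine-tuning minimizes the physics residual on a single unlabeled sample by gradient descent (Wirtinger calculus). All names, sizes, and learning rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_holo = 8, 12  # toy source / hologram grid sizes (hypothetical)

# Hypothetical complex propagation matrix standing in for the discretized
# Kirchhoff-Helmholtz integral: hologram pressure p = G @ source velocity v.
G = rng.standard_normal((n_holo, n_src)) + 1j * rng.standard_normal((n_holo, n_src))

# --- Stage 1: supervised pre-training (linear stand-in for the CV-CNN) ---
# Pre-training sources excite only the first 4 velocity components,
# mimicking a source type different from the transfer target.
V_train = np.zeros((n_src, 200), dtype=complex)
V_train[:4] = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
P_train = G @ V_train
W = np.linalg.lstsq(P_train.T, V_train.T, rcond=None)[0].T  # v_hat = W @ p

# --- Stage 2: physics-informed fine-tuning on a single unlabeled sample ---
# The target source excites all components; only its hologram pressure
# p_new is observed, so the loss is the physics residual ||G(W p) - p||^2.
v_target = rng.standard_normal(n_src) + 1j * rng.standard_normal(n_src)
p_new = G @ v_target

def physics_loss(W):
    r = G @ (W @ p_new) - p_new
    return float(np.real(np.vdot(r, r)))

loss_before = physics_loss(W)
lr = 1e-5
for _ in range(3000):
    r = G @ (W @ p_new) - p_new
    W = W - lr * np.outer(G.conj().T @ r, p_new.conj())  # Wirtinger gradient step

loss_after = physics_loss(W)
print(f"physics residual: {loss_before:.3f} -> {loss_after:.6f}")
```

The point of the sketch is structural: stage 2 needs no ground-truth source data, only the measured hologram pressure and the physical propagation model, which is what lets a single sample drive the cross-source adaptation.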