🤖 AI Summary
This study addresses the accuracy and stability challenges in parameter estimation for dynamical systems caused by non-convexity and sensitivity to initial values. To this end, it proposes a transfer learning–based neural parameter estimation framework following a pretraining–fine-tuning paradigm, introducing transfer learning for the first time to parameter identification in building RC thermal models. The method eliminates the need for initial parameter guesses: a deep neural network is pretrained across diverse RC configurations and varying data lengths to build a transferable knowledge base, then fine-tuned on the target system. Experimental results demonstrate that, with only 12 days of training data, estimation errors are reduced by 18.6–24.0%, and with 72 days of data, improvements reach up to 49.4%, significantly outperforming both genetic algorithms and neural networks trained from scratch.
📝 Abstract
Parameter estimation for dynamical systems remains challenging due to non-convexity and sensitivity to initial parameter guesses. Recent deep learning approaches enable accurate and fast parameter estimation but do not exploit transferable knowledge across systems. To address this, we introduce a transfer-learning-based neural parameter estimation framework built on a pretraining–fine-tuning paradigm. This approach improves accuracy and eliminates the need for an initial parameter guess. We apply the framework to building RC thermal models, evaluating it against a genetic algorithm and a from-scratch neural baseline across eight simulated buildings, one real-world building, two RC model configurations, and four training data lengths. Results demonstrate an 18.6–24.0% performance improvement with only 12 days of training data and up to 49.4% with 72 days. Beyond buildings, the proposed method represents a new paradigm for parameter estimation in dynamical systems.
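The pretraining stage described above relies on generating supervised examples from many candidate RC systems. As a minimal sketch (not the authors' implementation), the following assumes a hypothetical first-order 1R1C thermal model with illustrative parameter ranges: each system is sampled with random resistance `R` and capacitance `C`, forward-simulated under weather and heating inputs, and paired with its true parameters so a network can later be pretrained to map trajectories to parameters and then fine-tuned on the target building.

```python
import numpy as np

def simulate_1r1c(R, C, T_out, Q, T0, dt=3600.0):
    """Forward-simulate a 1R1C building thermal model with explicit Euler:
    C * dT/dt = (T_out - T)/R + Q, stepped hourly (dt in seconds)."""
    T = np.empty(len(T_out) + 1)
    T[0] = T0
    for k in range(len(T_out)):
        T[k + 1] = T[k] + dt / C * ((T_out[k] - T[k]) / R + Q[k])
    return T

def make_pretraining_set(n_systems, horizon, rng):
    """Sample diverse (R, C) configurations and simulate each one,
    yielding (trajectory-features, true-parameters) pairs that a neural
    estimator could be pretrained on before fine-tuning on a target system."""
    X, y = [], []
    for _ in range(n_systems):
        R = rng.uniform(0.5e-3, 5e-3)   # thermal resistance, K/W (assumed range)
        C = rng.uniform(1e7, 1e8)       # thermal capacitance, J/K (assumed range)
        # Synthetic daily outdoor-temperature cycle plus noise, and random heating.
        T_out = 10 + 5 * np.sin(2 * np.pi * np.arange(horizon) / 24) \
                + rng.normal(0, 1, horizon)
        Q = rng.uniform(0, 5e3, horizon)  # heating power, W
        T = simulate_1r1c(R, C, T_out, Q, T0=20.0)
        # Feature vector: indoor trajectory plus the exogenous inputs.
        X.append(np.concatenate([T[:-1], T_out, Q]))
        y.append([R, C])
    return np.array(X), np.array(y)
```

With no heating and constant outdoor temperature, the simulated indoor temperature relaxes toward the outdoor value, which is a quick sanity check that the discretization behaves like the underlying ODE.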