🤖 AI Summary
To address the challenges of uncertain source-building selection and poor generalizability in transfer learning for thermal dynamics modeling of single-family dwellings in Central Europe, this paper proposes GenTL, a generic transfer learning framework. GenTL leverages multi-source time-series data from 450 buildings to perform large-scale supervised pretraining of an LSTM model, departing from the conventional single-source transfer paradigm and enabling plug-and-play fine-tuning without manual selection of "teacher" buildings. Experiments across 144 target buildings show that GenTL reduces average RMSE by 42.1% compared to single-source transfer methods, substantially improving cross-building prediction accuracy and robustness. Its core contribution is introducing generic pretraining into building thermal dynamics modeling, establishing a scalable, highly generalizable modeling paradigm for data-scarce and heterogeneous building scenarios.
📝 Abstract
Transfer Learning (TL) is an emerging approach to modeling building thermal dynamics. It reduces the data required for a data-driven model of a target building by leveraging knowledge from a source building, enabling data-efficient models that can be used for advanced control and fault detection and diagnosis. A major limitation of the TL approach is its inconsistent performance across different sources: accurate source-building selection for a given target is crucial, yet it remains a persistent challenge. We present GenTL, a general transfer learning model for single-family houses in Central Europe that can be efficiently fine-tuned to a large variety of target buildings. GenTL is a Long Short-Term Memory (LSTM) network pretrained on data from 450 different buildings. This general model eliminates the need for source-building selection by serving as a universal source for fine-tuning. Comparative analysis with conventional single-source to single-target TL demonstrates the efficacy and reliability of the general pretraining approach. Testing GenTL on 144 target buildings for fine-tuning reveals an average prediction error (RMSE) reduction of 42.1% compared to fine-tuning single-source models.
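The pretrain-then-fine-tune workflow the abstract describes can be sketched in miniature. The snippet below is a hedged illustration only: it uses a simple linear model as a stand-in for the paper's LSTM, synthetic data in place of the 450-building dataset, and a ridge penalty toward the pretrained weights as one possible fine-tuning strategy; none of these choices are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(rng, spread):
    # Hypothetical building-specific thermal parameters drawn around a shared mean
    return np.array([0.8, -0.3, 0.1]) + rng.normal(0.0, spread, 3)

def make_data(rng, w, n):
    X = rng.normal(size=(n, 3))          # stand-in features (e.g. weather, heating power)
    y = X @ w + rng.normal(0.0, 0.1, n)  # indoor temperature response plus noise
    return X, y

# 1) Generic pretraining: pool data from many source buildings
#    (450 in the paper; a handful here) and fit one shared model.
Xs, ys = [], []
for _ in range(20):
    X, y = make_data(rng, sample_weights(rng, 0.05), 200)
    Xs.append(X)
    ys.append(y)
X_pool, y_pool = np.vstack(Xs), np.concatenate(ys)
w_generic, *_ = np.linalg.lstsq(X_pool, y_pool, rcond=None)

# 2) Fine-tuning: adapt to a data-scarce target building, with a ridge
#    penalty pulling the solution toward the pretrained weights.
w_target = sample_weights(rng, 0.2)      # target deviates more from the mean
X_t, y_t = make_data(rng, w_target, 30)  # only a little target data
lam = 1.0
w_ft = np.linalg.solve(X_t.T @ X_t + lam * np.eye(3),
                       X_t.T @ y_t + lam * w_generic)

# 3) Evaluate RMSE on held-out target data: fine-tuning should
#    recover most of the target-specific behavior.
X_te, y_te = make_data(rng, w_target, 1000)
rmse = lambda w: float(np.sqrt(np.mean((X_te @ w - y_te) ** 2)))
print(f"generic RMSE: {rmse(w_generic):.3f}, fine-tuned RMSE: {rmse(w_ft):.3f}")
```

The key design point mirrored here is that the pooled model serves as a single universal source: no per-target source selection happens, only cheap adaptation on the target's own small dataset.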