🤖 AI Summary
This study addresses the widespread lack of building-level thermal load data in urban areas, which hinders accurate heating demand mapping and decarbonization planning. It proposes a novel approach that leverages zero-shot large vision-language models (VLMs) to extract semantic features—such as roof age and building density—from satellite imagery using natural language prompts. These features are integrated with GIS and building metadata to train an MLP regressor for predicting annual heating demand, entirely without labeled thermal data. Evaluated in data-scarce regions, the method achieves substantially higher prediction accuracy than baseline models, improving R² by 93.7% and reducing MAE by 30%. Moreover, the high-impact semantic features identified by the model align closely with spatial patterns of elevated heating demand.
📝 Abstract
Accurate heat-demand maps play a crucial role in decarbonizing space heating, yet most municipalities lack the detailed building-level data needed to produce them. We introduce HeatPrompt, a zero-shot vision-language energy modeling framework that estimates annual heat demand using semantic features extracted from satellite images together with basic Geographic Information System (GIS) and building-level features. We prompt pretrained Large Vision Language Models (VLMs) with a domain-specific prompt to act as an energy planner and extract visual attributes that correspond to thermal load, such as roof age and building density, from RGB satellite imagery. A Multi-Layer Perceptron (MLP) regressor trained on these captions achieves an $R^2$ uplift of 93.7% and shrinks the mean absolute error (MAE) by 30% relative to the baseline model. Qualitative analysis shows that high-impact tokens align with high-demand zones, offering lightweight support for heat planning in data-scarce regions.
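The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the VLM call is stubbed out (a real system would send the prompt plus the RGB tile to a pretrained vision-language model and parse its caption into numeric scores), the GIS features and heat-demand targets are synthetic, and all names and hyperparameters are assumptions.

```python
# Hypothetical sketch of a HeatPrompt-style pipeline: a VLM is prompted to
# rate visual attributes of a satellite tile, and the resulting numeric
# scores (plus basic GIS/building features) feed an MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

PROMPT = ("You are an energy planner. Rate this satellite image on: "
          "roof age (0-1), building density (0-1), vegetation cover (0-1).")

def vlm_features(tile_id):
    """Stub for the zero-shot VLM call; returns deterministic pseudo-scores
    so the sketch runs without a model. A real system would parse the
    VLM's caption into these three numbers."""
    rng = np.random.default_rng(sum(ord(c) for c in tile_id))
    return rng.random(3)  # [roof_age, building_density, vegetation]

# Assemble features: VLM semantics + basic GIS/building metadata (synthetic)
tiles = [f"tile_{i}" for i in range(200)]
semantic = np.array([vlm_features(t) for t in tiles])
gis = np.random.default_rng(0).random((200, 2))  # e.g. floor area, year built
X = np.hstack([semantic, gis])  # shape (200, 5)

# Synthetic annual heat-demand target for the sketch (arbitrary units)
y = 120 * X[:, 0] + 80 * X[:, 1] + 40 * X[:, 3] + 10

# Train on the first 150 tiles, evaluate held-out R^2 on the rest
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])
```

In the paper's setting the targets would be annual heat demand and the baseline comparison would omit the VLM-derived semantic features; here everything downstream of the feature matrix is standard supervised regression.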