BIGNet: Pretrained Graph Neural Network for Embedding Semantic, Spatial, and Topological Data in BIM Models

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large foundation models (LFMs) struggle to effectively capture the semantic, spatial, and topological multimodal features inherent in Building Information Modeling (BIM). Method: This paper introduces BIGNet—the first large-scale graph neural network tailored for BIM—built upon a million-node homogeneous BIM graph dataset. We enhance GraphMAE2 with a local message-passing mechanism based on 30-cm spatial neighborhoods, incorporate node-masking self-supervised pretraining, and integrate Graph Attention Networks (GATs) to improve transferability. Contribution/Results: BIGNet pioneers unified representation learning and cross-task reuse of multimodal BIM design knowledge within a graph-structured framework. Experiments on BIM design review tasks demonstrate that BIGNet achieves a 72.7% average F1-score improvement over non-pretrained baselines, significantly advancing automated understanding and reuse of civil engineering design knowledge.
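The summary's "30-cm spatial neighborhood" idea can be illustrated with a small sketch: connect BIM components whose centroids lie within 0.3 m of each other to form the edges of a homogeneous graph. This is a hypothetical reconstruction for illustration only — the component names, coordinates, and the use of a KD-tree are assumptions, not details from the paper.

```python
# Hypothetical sketch: build spatial edges for a homogeneous BIM graph by
# connecting components whose centroids lie within a 30 cm radius.
# Centroids and component roles below are illustrative, not from the paper.
import numpy as np
from scipy.spatial import cKDTree

def build_spatial_edges(centroids, radius=0.3):
    """Return sorted undirected edges (i, j) for centroids within `radius` metres."""
    tree = cKDTree(centroids)
    return sorted(tree.query_pairs(r=radius))

# Toy example: four component centroids in metres
centroids = np.array([
    [0.0,  0.0, 0.0],   # e.g. a wall
    [0.2,  0.0, 0.0],   # a door 20 cm away  -> connected
    [1.0,  0.0, 0.0],   # a column 1 m away  -> not connected
    [0.25, 0.1, 0.0],   # a window near the door
])
edges = build_spatial_edges(centroids)  # -> [(0, 1), (0, 3), (1, 3)]
```

In a full pipeline, these spatial edges would be merged with semantic and topological relations before message passing; here only the spatial criterion is shown.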

📝 Abstract
Large Foundation Models (LFMs) have demonstrated significant advantages in civil engineering, but they primarily focus on textual and visual data, overlooking the rich semantic, spatial, and topological features in BIM (Building Information Modelling) models. Therefore, this study develops the first large-scale graph neural network (GNN), BIGNet, to learn and reuse multidimensional design features embedded in BIM models. First, a scalable graph representation is introduced to encode the "semantic-spatial-topological" features of BIM components, and a dataset with nearly 1 million nodes and 3.5 million edges is created. Subsequently, BIGNet is proposed by introducing a new message-passing mechanism to GraphMAE2 and is further pretrained with a node-masking strategy. Finally, BIGNet is evaluated on various transfer learning tasks for BIM-based design checking. Results show that: 1) a homogeneous graph representation outperforms a heterogeneous one in learning design features, 2) considering local spatial relationships within a 30 cm radius enhances performance, and 3) BIGNet with GAT (Graph Attention Network)-based feature extraction achieves the best transfer learning results. This innovation yields a 72.7% improvement in average F1-score over non-pretrained models, demonstrating its effectiveness in learning and transferring BIM design features and facilitating their automated application in future design and lifecycle management.
Problem

Research questions and friction points this paper is trying to address.

Develops BIGNet GNN to embed BIM semantic, spatial, topological features
Creates scalable graph representation with 1M nodes and 3.5M edges
Evaluates transfer learning for BIM-based design checking tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrained GNN for BIM semantic-spatial-topological embedding
New message-passing mechanism with node masking strategy
GAT-based feature extraction for transfer learning tasks
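The node-masking pretraining idea listed above (GraphMAE2-style) can be sketched in a few lines: a random subset of node feature rows is replaced with a shared [MASK] vector, the encoder sees the corrupted graph, and the reconstruction loss is computed only on the masked rows. This is a minimal illustrative sketch, not the paper's implementation; all names, dimensions, and the use of a cosine-error loss are assumptions.

```python
# Minimal numpy sketch of node-masking self-supervised pretraining
# (GraphMAE2-style). Dimensions and the mask rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mask_nodes(X, mask_rate=0.3, mask_token=None):
    """Replace a random fraction of rows of X with a shared [MASK] vector."""
    n, d = X.shape
    if mask_token is None:
        mask_token = np.zeros(d)          # simple zero mask token
    idx = rng.choice(n, size=int(mask_rate * n), replace=False)
    X_corrupt = X.copy()
    X_corrupt[idx] = mask_token
    return X_corrupt, np.sort(idx)

def masked_cosine_loss(X_true, X_pred, idx, eps=1e-8):
    """Cosine reconstruction error computed on the masked rows only."""
    a, b = X_true[idx], X_pred[idx]
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1) + eps)
    return float(np.mean(1.0 - cos))

X = rng.normal(size=(10, 4))              # 10 nodes, 4-dim features
Xc, masked = mask_nodes(X)                # corrupted input for the encoder
# Stand-in for an encoder/decoder output: corrupted features plus noise
loss = masked_cosine_loss(X, Xc + rng.normal(scale=0.1, size=X.shape), masked)
```

In actual pretraining, `X_pred` would come from a GAT encoder and decoder trained to minimise this loss; the sketch only shows the masking and the masked-row loss computation.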
Jin Han — The University of Tokyo, National Institute of Informatics (computer vision)
Xin-Zheng Lu — Department of Civil Engineering, Tsinghua University, China
Jia-Rui Lin — Department of Civil Engineering, Tsinghua University, China