🤖 AI Summary
This study addresses the longstanding reliance on designer intuition in architectural massing generation by proposing a data-driven, conditional generative framework that leverages functional requirements and site context. Formulating massing design as a vision-language modeling (VLM) task, the authors introduce CoMa-20K, the first multimodal dataset encompassing geometric, functional, and economic attributes of building masses alongside visual site context. The framework is trained and evaluated using both fine-tuned and zero-shot VLM approaches. Experimental results demonstrate that the model effectively generates contextually appropriate and functionally plausible massing proposals, thereby validating the feasibility of this methodology and establishing a new benchmark for data-driven architectural design.
📄 Abstract
The conceptual design phase in architecture and urban planning, particularly building massing, is complex and heavily reliant on designer intuition and manual effort. To address this, we propose an automated framework for generating building massing based on functional requirements and site context. A primary obstacle to such data-driven methods has been the lack of suitable datasets. Consequently, we introduce the CoMa-20K dataset, a comprehensive collection that includes detailed massing geometries, associated economic and programmatic data, and visual representations of the development site within its existing urban context. We benchmark this dataset by formulating massing generation as a conditional task for Vision-Language Models (VLMs), evaluating both fine-tuned and large zero-shot models. Our experiments reveal the inherent complexity of the task while demonstrating the potential of VLMs to produce context-sensitive massing options. The dataset and analysis establish a foundational benchmark and highlight significant opportunities for future research in data-driven architectural design.