AI Summary
This work addresses the lack of effective evaluation benchmarks for assessing 3D spatial reasoning and environment construction capabilities in code generation models. To bridge this gap, we introduce VoxelCode, a novel platform, along with VoxelCodeBench, the first evaluation framework that jointly measures executability and spatial correctness in 3D code generation. Built upon Unreal Engine's API, our framework executes generated code and models environments using voxelized representations. It employs a unified evaluation pipeline combining automated metrics and human assessment across three core reasoning dimensions: symbolic interpretation, geometric construction, and artistic composition. Experiments reveal that while current models can produce syntactically executable code, they struggle significantly with geometric structures and multi-object spatial arrangements, highlighting fundamental challenges in 3D spatial reasoning. The platform is publicly released to foster community-driven extensions.
Abstract
Evaluating code generation models for 3D spatial reasoning requires executing generated code in realistic environments and assessing outputs beyond surface-level correctness. We introduce VoxelCode, a platform for analyzing code generation capabilities for 3D understanding and environment creation. Our platform integrates natural language task specification, API-driven code execution in Unreal Engine, and a unified evaluation pipeline supporting both automated metrics and human assessment. To demonstrate its utility, we construct VoxelCodeBench, a benchmark of voxel manipulation tasks spanning three reasoning dimensions: symbolic interpretation, geometric construction, and artistic composition. Evaluating leading code generation models, we find that producing executable code is far easier than producing spatially correct outputs, with geometric construction and multi-object composition proving particularly challenging. By open-sourcing our platform and benchmark, we provide the community with extensible infrastructure for developing new 3D code generation benchmarks and probing spatial reasoning in future models.
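To make the distinction between executability and spatial correctness concrete, the sketch below shows one plausible automated metric over voxelized outputs: intersection-over-union of boolean occupancy grids. This is an illustrative assumption, not necessarily the metric used by VoxelCodeBench; the function name and the example shapes are hypothetical.

```python
import numpy as np

def voxel_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of two boolean voxel occupancy grids.

    Executable code can still score poorly here: the program runs,
    but the voxels it places do not match the target geometry.
    """
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty grids are trivially identical.
    return float(inter) / float(union) if union else 1.0

# Hypothetical example: a 2x2x2 solid cube as the target, and a
# generated result that built the same cube shifted one layer along z.
target = np.zeros((4, 4, 4), dtype=bool)
target[1:3, 1:3, 1:3] = True
pred = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 0:2] = True

print(round(voxel_iou(pred, target), 3))  # 4 shared voxels / 12 in union -> 0.333
```

A metric of this form is indifferent to how the code was written; it scores only the resulting occupancy, which is what separates geometric correctness from mere syntactic executability.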