DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering

πŸ“… 2025-07-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the lack of industrial-scenario-oriented automated agent evaluation benchmarks in civil engineering, this paper introduces the first standardized benchmark specifically for technical drawing revision tasks. It encompasses 12 real-world scenarios, integrates 46 domain-specific tools, and provides 1,920 rigorously curated test instances. The benchmark systematically evaluates large language model–based agents across key dimensions: complex instruction comprehension, structured data processing, function calling fidelity, and dynamic adaptability. We propose an implicit strategy awareness mechanism to quantify agent robustness against instruction quality fluctuations, and provide a fine-grained error attribution and capability diagnosis framework. Empirical evaluation exposes critical performance bottlenecks of state-of-the-art agents in engineering tasks, delivering a reproducible benchmark, concrete improvement pathways, and empirical foundations for optimizing industrial-grade automation systems.

πŸ“ Abstract
Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. However, more benchmarks are needed to systematically evaluate automation agents from an industrial perspective, for example, in civil engineering. Therefore, we propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision, a representative task in civil engineering. DrafterBench contains twelve types of tasks summarized from real-world drawing files, with 46 customized functions/tools and 1,920 tasks in total. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions, leveraging prior knowledge, and adapting to dynamic instruction quality via implicit policy awareness. The toolkit comprehensively assesses distinct capabilities in structured data comprehension, function execution, instruction following, and critical reasoning. DrafterBench offers detailed analysis of task accuracy and error statistics, aiming to provide deeper insight into agent capabilities and to identify improvement targets for integrating LLMs in engineering applications. Our benchmark is available at https://github.com/Eason-Li-AIS/DrafterBench, with the test set hosted at https://huggingface.co/datasets/Eason666/DrafterBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLM agents for civil engineering task automation
Assess AI proficiency in technical drawing revision tasks
Benchmark structured data comprehension and function execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source benchmark for evaluating LLM agents
Customized functions for technical drawing revision
Comprehensive assessment of structured data comprehension
Yinsheng Li
Department of Civil Engineering, McGill University
Zhen Dong
UC Santa Barbara and NVIDIA
Yi Shao
Assistant Professor, McGill University
UHPC · Robotic Construction · Structural Optimization