🤖 AI Summary
Current evaluations of large language models focus predominantly on the correctness of code implementations, leaving no standardized benchmark for assessing software architecture design capabilities. To address this gap, this work proposes the first generative-AI evaluation platform tailored specifically to software architecture tasks. The platform introduces a standardized dataset, records reasoning trajectories, and automates assessment. A novel extensible plugin architecture accommodates both standalone models and tool-augmented coding agents. Shipped with a command-line interface and an interactive web-based leaderboard, the platform is open-sourced to foster reproducible research and community-driven development of a shared ecosystem for evaluating architectural intelligence in AI systems.
📝 Abstract
Benchmarks for large language models (LLMs) have progressed from snippet-level function generation to repository-level issue resolution, yet they overwhelmingly target implementation correctness. Software architecture tasks remain under-specified and difficult to compare across models, despite their central role in maintaining and evolving complex systems. We present ArchBench, the first unified platform for benchmarking LLM capabilities on software architecture tasks. ArchBench provides a command-line tool with a standardized pipeline for dataset download, inference with trajectory logging, and automated evaluation, alongside a public web interface with an interactive leaderboard. The platform is built around a plugin architecture where each task is a self-contained module, making it straightforward for the community to contribute new architectural tasks and evaluation results. We use the term LLMs broadly to encompass generative AI (GenAI) solutions for software engineering, including both standalone models and LLM-based coding agents equipped with tools. Both the CLI tool and the web platform are openly available to support reproducible research and community-driven growth of architectural benchmarking.
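The abstract describes a plugin architecture in which each task is a self-contained module covering data, inference with trajectory logging, and automated evaluation. A minimal sketch of how such a design could look is below; all names (`ArchTask`, `register_task`, the toy `pattern-naming` task) are hypothetical illustrations, not the actual ArchBench API.

```python
from abc import ABC, abstractmethod

# Hypothetical registry mapping task names to plugin classes.
# Illustrates the idea of self-contained task modules; not the real ArchBench API.
TASK_REGISTRY: dict[str, type] = {}

def register_task(name: str):
    """Decorator that registers a task plugin under a name."""
    def wrap(cls):
        TASK_REGISTRY[name] = cls
        return cls
    return wrap

class ArchTask(ABC):
    """A self-contained benchmark task: dataset, inference, evaluation."""

    @abstractmethod
    def load_dataset(self) -> list[dict]:
        """Return samples, each with at least a 'prompt' field."""

    @abstractmethod
    def evaluate(self, sample: dict, answer: str) -> bool:
        """Automated check of a model answer against the sample."""

    def run(self, model) -> dict:
        """Run inference over the dataset, logging a reasoning trajectory."""
        trajectory, correct = [], 0
        samples = self.load_dataset()
        for sample in samples:
            answer = model(sample["prompt"])
            trajectory.append({"prompt": sample["prompt"], "answer": answer})
            correct += self.evaluate(sample, answer)
        return {"score": correct / len(samples), "trajectory": trajectory}

@register_task("pattern-naming")
class PatternNaming(ArchTask):
    """Toy task: name the design pattern from a short description."""
    def load_dataset(self):
        return [{"prompt": "One globally shared instance?", "gold": "singleton"}]
    def evaluate(self, sample, answer):
        return sample["gold"] in answer.lower()

# A "model" is any callable from prompt to text, so both standalone
# LLMs and tool-augmented agents fit behind the same interface.
result = TASK_REGISTRY["pattern-naming"]().run(lambda p: "Singleton pattern")
print(result["score"])  # 1.0
```

The key design point, under these assumptions, is that contributing a new architectural task only requires dropping in one registered class; the pipeline (inference loop, trajectory logging, scoring) stays shared.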