🤖 AI Summary
Current LLM evaluation frameworks lack coverage of computer architecture, the interdisciplinary domain that bridges software and hardware. To address this gap, we introduce QuArch, the first domain-specific benchmark for evaluating LLMs on computer architecture. It comprises 2,671 expert-curated and validated question-answer pairs spanning core areas including processor design, memory systems, and interconnection networks, and supports fine-grained assessment across four task categories: knowledge comprehension, analysis, design, and implementation. QuArch is the first benchmark to systematically expose substantial performance disparities among state-of-the-art LLMs on architecture reasoning tasks, with accuracy ranging widely from 34% to 72% and particularly poor performance on higher-order design and implementation questions. By filling this evaluation gap, QuArch provides a reproducible and rigorous foundation for diagnosing LLM capabilities and guiding model improvement in computer architecture.
📝 Abstract
The field of computer architecture, which bridges high-level software abstractions and low-level hardware implementations, remains absent from current large language model (LLM) evaluations. To this end, we present QuArch (pronounced 'quark'), the first benchmark designed to facilitate the development and evaluation of LLM knowledge and reasoning capabilities specifically in computer architecture. QuArch provides a comprehensive collection of 2,671 expert-validated question-answer (QA) pairs covering various aspects of computer architecture, including processor design, memory systems, and interconnection networks. Our evaluation reveals that while frontier models possess domain-specific knowledge, they struggle with skills that require higher-order thinking in computer architecture. Frontier model accuracies vary widely (from 34% to 72%) on these advanced questions, highlighting persistent gaps in architectural reasoning across analysis, design, and implementation QAs. By holistically assessing fundamental skills, QuArch provides a foundation for building and measuring LLM capabilities that can accelerate innovation in computing systems. With over 140 contributors from 40 institutions, this benchmark represents a community effort to set the standard for architectural reasoning in LLM evaluation.
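As a rough illustration of how a QuArch-style evaluation might be scored, the sketch below computes per-category accuracy over multiple-choice QA pairs. The record format, category labels, and the `model_answer` stub are assumptions for illustration only, not the benchmark's actual data schema or API.

```python
# Hypothetical sketch: per-category accuracy on QuArch-style QA pairs.
# The data format and the model_answer stub are assumed, not official.
from collections import defaultdict

def model_answer(question, choices):
    """Stand-in for an LLM call; here it naively picks the first choice."""
    return choices[0]

def accuracy_by_category(qa_pairs):
    correct = defaultdict(int)
    total = defaultdict(int)
    for qa in qa_pairs:
        pred = model_answer(qa["question"], qa["choices"])
        total[qa["category"]] += 1
        if pred == qa["answer"]:
            correct[qa["category"]] += 1
    # Fraction correct per task category
    return {cat: correct[cat] / total[cat] for cat in total}

# Toy examples in the assumed record format
qa_pairs = [
    {"question": "Which structure reduces average memory access latency?",
     "choices": ["cache", "ALU"], "answer": "cache",
     "category": "knowledge comprehension"},
    {"question": "Which interconnect topology has network diameter 1?",
     "choices": ["mesh", "fully connected"], "answer": "fully connected",
     "category": "analysis"},
]
print(accuracy_by_category(qa_pairs))
```

Reporting accuracy per category, rather than a single aggregate, is what lets a benchmark like QuArch separate knowledge recall from the higher-order analysis, design, and implementation skills where the abstract reports the largest gaps.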