🤖 AI Summary
Background: The electronic design automation (EDA) community lacks a dedicated multimodal large language model (MLLM) evaluation benchmark tailored to circuit design. Method: This paper introduces MMCircuitEval, the first domain-specific multimodal benchmark for EDA, covering frontend and backend tasks across digital and analog circuits. It organizes evaluations hierarchically along four dimensions — design stage, circuit type, capability category (knowledge, understanding, reasoning, computation), and difficulty level — and curates high-quality data from textbooks, problem sets, datasheets, and real-world engineering documents, all validated by domain experts. Contribution/Results: Experimental evaluation reveals significant performance bottlenecks of current MLLMs on backend design and complex computational tasks, underscoring the need for domain-adapted training data and modeling paradigms. MMCircuitEval establishes a foundational evaluation resource and technical roadmap for developing EDA-oriented MLLMs.
📝 Abstract
The emergence of multimodal large language models (MLLMs) presents promising opportunities for automation and enhancement in Electronic Design Automation (EDA). However, comprehensively evaluating these models in circuit design remains challenging due to the narrow scope of existing benchmarks. To bridge this gap, we introduce MMCircuitEval, the first multimodal benchmark specifically designed to assess MLLM performance comprehensively across diverse EDA tasks. MMCircuitEval comprises 3,614 meticulously curated question-answer (QA) pairs spanning digital and analog circuits across critical EDA stages, ranging from general knowledge and specifications to front-end and back-end design. Derived from textbooks, technical question banks, datasheets, and real-world documentation, each QA pair undergoes rigorous expert review for accuracy and relevance. Our benchmark uniquely categorizes questions by design stage, circuit type, tested abilities (knowledge, comprehension, reasoning, computation), and difficulty level, enabling detailed analysis of model capabilities and limitations. Extensive evaluations reveal significant performance gaps among existing MLLMs, particularly in back-end design and complex computations, highlighting the critical need for targeted training datasets and modeling approaches. MMCircuitEval provides a foundational resource for advancing MLLMs in EDA, facilitating their integration into real-world circuit design workflows. Our benchmark is available at https://github.com/cure-lab/MMCircuitEval.