Mozart: A Chiplet Ecosystem-Accelerator Codesign Framework for Composable Bespoke Application Specific Integrated Circuits

📅 2025-10-09
🤖 AI Summary
Traditional AI accelerators treat neural networks as homogeneous entities, ignoring operator-level heterogeneity and leading to suboptimal energy-efficiency–cost trade-offs; conversely, operator-specific customization incurs prohibitively high non-recurring engineering (NRE) costs. To address this, we propose a co-design framework integrating a chiplet-based “micro-core” ecosystem with accelerator architecture—enabling systematic, operator-level decoupling and reusable micro-core composition. Our approach unifies tensor fusion, heterogeneous memory optimization, constraint-aware parallel scheduling, and physical realizability verification. Using only eight carefully selected micro-cores, we achieve 43.5%–78.8% energy-efficiency improvement over baseline accelerators across large-model serving, speculative decoding, and autonomous-driving perception workloads. Concurrently, both energy–cost product and energy–latency product decrease significantly, effectively balancing customization, energy efficiency, and deployment flexibility.

📝 Abstract
Modern AI acceleration faces a fundamental challenge: conventional assumptions about memory requirements, batching effectiveness, and latency-throughput tradeoffs are systemwide generalizations that ignore the heterogeneous computational patterns of individual neural network operators. However, pursuing network-level customization and operator-level heterogeneity incurs substantial Non-Recurring Engineering (NRE) costs. While chiplet-based approaches have been proposed to amortize NRE costs, reuse opportunities remain limited unless the truly necessary chiplets are carefully identified. This paper introduces Mozart, a chiplet-ecosystem and accelerator codesign framework that systematically constructs low-cost bespoke application-specific integrated circuits (BASICs). BASICs leverage operator-level disaggregation to exploit chiplet and memory heterogeneity, tensor fusion, and tensor parallelism, with place-and-route validation ensuring physical implementability. The framework also enables constraint-aware system-level optimization across deployment contexts ranging from datacenter inference serving to edge computing in autonomous vehicles. The evaluation confirms that with just 8 strategically selected chiplets, Mozart-generated composite BASICs achieve 43.5%, 25.4%, 67.7%, and 78.8% reductions in energy, energy-cost product, energy-delay product (EDP), and energy-delay-cost product, respectively, compared to traditional homogeneous accelerators. For datacenter LLM serving, Mozart achieves 15-19% energy reduction and 35-39% energy-cost improvement. In speculative decoding, Mozart delivers throughput improvements of 24.6-58.6% while reducing energy consumption by 38.6-45.6%. For autonomous-vehicle perception, Mozart reduces energy-cost by 25.54% and energy by 10.53% under real-time constraints.
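The composite metrics cited in the abstract (energy-delay product, energy-cost product, energy-delay-cost product) are simple multiplicative figures of merit. A minimal sketch of how they are computed, using made-up placeholder numbers rather than values from the paper:

```python
# Illustrative only: the accelerator figures below are hypothetical
# placeholders, not measurements from the Mozart paper.

def composite_metrics(energy_j, delay_s, cost_usd):
    """Return the energy-delay, energy-cost, and energy-delay-cost products."""
    return {
        "EDP": energy_j * delay_s,
        "energy_cost": energy_j * cost_usd,
        "EDCP": energy_j * delay_s * cost_usd,
    }

baseline = composite_metrics(energy_j=100.0, delay_s=2.0, cost_usd=50.0)
bespoke = composite_metrics(energy_j=56.5, delay_s=1.1, cost_usd=66.0)

for metric in baseline:
    reduction = 100.0 * (1.0 - bespoke[metric] / baseline[metric])
    print(f"{metric}: {reduction:.1f}% reduction vs. baseline")
```

A design can trade a higher per-chip cost for large energy and latency wins and still come out ahead on these products, which is the balance the abstract's headline numbers describe.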
Problem

Research questions and friction points this paper is trying to address.

Addressing high NRE costs in custom AI accelerator design
Optimizing chiplet selection for heterogeneous neural network operators
Enabling energy-efficient ASICs across datacenter and edge deployments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chiplet ecosystem framework constructs bespoke application-specific circuits
Operator-level disaggregation enables chiplet and memory heterogeneity
Constraint-aware optimization across datacenter and edge deployments
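The innovation bullets describe selecting a small set of chiplets to cover heterogeneous operators. One way such constraint-aware selection could look, as a greedy sketch under assumptions of my own (the per-operator energy table, chiplet names, and greedy policy are hypothetical, not the paper's algorithm; only the budget of 8 chiplet types comes from the paper):

```python
# Hypothetical sketch: assign each operator the lowest-energy chiplet,
# capping the number of distinct chiplet types (the paper's budget is 8).
# Operator names, chiplet names, and energies are illustrative.

def select_chiplets(op_energy, max_types=8):
    """op_energy: {operator: {chiplet: energy}}; returns {operator: chiplet}."""
    assignment = {}
    chosen = set()
    for op, candidates in op_energy.items():
        # Once the type budget is exhausted, restrict to already-chosen types.
        pool = candidates if len(chosen) < max_types else {
            c: e for c, e in candidates.items() if c in chosen
        }
        best = min(pool, key=pool.get)
        assignment[op] = best
        chosen.add(best)
    return assignment

ops = {
    "attention": {"systolic": 5.0, "vector": 7.0},
    "layernorm": {"systolic": 4.0, "vector": 2.5},
    "gemm": {"systolic": 3.0, "vector": 6.0},
}
print(select_chiplets(ops, max_types=2))
```

A real framework would co-optimize this choice with memory heterogeneity, tensor fusion, and place-and-route feasibility rather than picking per-operator minima in isolation.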
Haoran Jin
Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA
Jirong Yang
Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA
Yunpeng Liu
Wuhan University of Technology
Barry Lyu
Electrical & Computer Engineering, University of Michigan, Ann Arbor, MI, USA
Kangqi Zhang
Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA
Nathaniel Bleier
Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA