DeepStack: Scalable and Accurate Design Space Exploration for Distributed 3D-Stacked AI Accelerators

📅 2026-04-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently exploring the hardware-software co-design space for distributed inference of large language models on 3D-stacked AI accelerators. The authors propose DeepStack, a high-performance modeling and search framework that enables early-stage co-optimization through a novel dual-stage network abstraction and a tile-level compute-communication overlap mechanism. Fine-grained 3D memory modeling (capturing transaction-aware bandwidth, bank activation constraints, buffering limitations, and thermal-power effects), combined with distributed parallelism strategies and a hierarchical search algorithm, lets DeepStack efficiently navigate a design space of 2.5×10¹⁴ configurations, achieving up to 10⁵× speedup over existing simulators at comparable accuracy. It discovers an optimized solution delivering 9.5× higher throughput, validated against an 8×B200 GPU system. A key insight is that batch size exerts a far greater influence on architectural choices than the difference between the prefill and decode phases.
📝 Abstract
Advances in hybrid bonding and packaging have driven growing interest in 3D DRAM-stacked accelerators with higher memory bandwidth and capacity. As LLMs scale to hundreds of billions or trillions of parameters, distributed inference across multiple 3D chips becomes essential. With cross-stack co-design becoming increasingly critical, we propose DeepStack, an accurate and efficient performance model and tool that enables early-stage system-hardware co-design space exploration (DSE) for distributed 3D-stacked AI systems. At the hardware level, DeepStack captures fine-grained 3D memory semantics such as transaction-aware bandwidth, bank activation constraints, buffering limitations, and thermal-power behavior. At the system level, DeepStack incorporates comprehensive parallelization strategies and execution scheduling for distributed LLM inference. With novel modeling techniques such as dual-stage network abstraction and tile-level compute-communication overlap, DeepStack achieves up to 100,000x faster runtime than state-of-the-art simulators at comparable accuracy, cross-validated against our in-house 3D designs, an NS-3 backend (2.12% error), and vLLM serving on 8xB200 GPUs (12.18% error). With hierarchical design space search, DeepStack enables efficient exploration over 2.5x10^14 design points spanning 3D-stacked DRAM layers, DRAM vertical connectivity, interconnect, compute-memory allocation, and distributed scheduling. Compared with baseline designs, DeepStack achieves up to 9.5x higher throughput through co-optimized parallelism and 3D architecture search. Our DSE further reveals that batch size drives a more fundamental architectural divide than the prefill/decode distinction, and that parallelism strategy and hardware architecture are tightly coupled: an incomplete schedule search locks in suboptimal silicon that no amount of software tuning can later recover. We intend to open-source DeepStack to support future research.
Problem

Research questions and friction points the paper addresses.

distributed inference
3D-stacked accelerators
design space exploration
LLM
system-hardware co-design
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-stacked accelerators
design space exploration
distributed LLM inference
performance modeling
compute-communication overlap