Taming the Tail: NoI Topology Synthesis for Mixed DL Workloads on Chiplet-Based Accelerators

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In heterogeneous chiplet accelerators, data transfers between HBM/DRAM and compute units induce severe NoI (Network-on-Interposer) tail-latency spikes—especially during large-model inference—due to contention in k-dimensional torus topologies caused by frequent parameter/activation movement, leading to SLA violations. Method: This paper proposes a workload-aware NoI topology co-optimization framework. It introduces an Interference Score (IS) model to quantify worst-case performance degradation under contention and formulates topology generation as a multi-objective optimization problem balancing throughput, tail latency, and power. Topology synthesis is automated via the PARL reinforcement learning framework. Contribution/Results: The synthesized topologies reduce the worst-case slowdown ratio to 1.2×, maintain average throughput comparable to link-rich mesh connectivity, and significantly alleviate memory-side contention—while strictly satisfying SLA constraints.
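The Interference Score can be pictured as a worst-case per-flow slowdown: for each flow, compare its latency under contention to its latency in isolation, and take the maximum ratio. The function name and exact formula below are illustrative assumptions—the paper's precise IS model may differ.

```python
# Hypothetical sketch of an Interference Score (IS) as worst-case slowdown.
# The formula (max of contended/isolated latency ratios) is an assumption,
# not the paper's exact definition.

def interference_score(isolated_latencies, contended_latencies):
    """IS = max over flows of (latency under contention / latency in isolation).

    A value of 1.0 means no interference; 1.2 means the worst-hit flow
    is slowed down by 20%.
    """
    return max(c / i for i, c in zip(isolated_latencies, contended_latencies))

# Example: three memory-side flows (latencies in cycles)
isolated = [100.0, 120.0, 80.0]
contended = [115.0, 150.0, 96.0]
print(interference_score(isolated, contended))  # 1.25 (the 120-cycle flow)
```

Under this reading, the reported 1.2× slowdown ratio means the synthesized topologies cap this worst-case ratio near 1.2 even under bursty memory traffic.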

📝 Abstract
Heterogeneous chiplet-based systems improve scaling by disaggregating CPUs/GPUs and emerging technologies (HBM/DRAM). However, this on-package disaggregation introduces latency in the Network-on-Interposer (NoI). We observe that in modern large-model inference, parameters and activations routinely move back and forth from HBM/DRAM, injecting large, bursty flows into the interposer. These memory-driven transfers inflate tail latency and violate Service Level Agreements (SLAs) across k-ary n-cube baseline NoI topologies. To address this gap, we introduce an Interference Score (IS) that quantifies worst-case slowdown under contention. We then formulate NoI synthesis as a multi-objective optimization (MOO) problem. We develop PARL (Partition-Aware Reinforcement Learner), a topology generator that balances throughput, latency, and power. PARL-generated topologies reduce contention at the memory cut, meet SLAs, and cut worst-case slowdown to 1.2× while maintaining competitive mean throughput relative to link-rich meshes. Overall, this work reframes NoI design for heterogeneous chiplet accelerators around workload-aware objectives.
Problem

Research questions and friction points this paper is trying to address.

Optimizing Network-on-Interposer topology for mixed deep learning workloads
Reducing tail latency caused by memory-driven transfers in chiplet systems
Balancing throughput, latency, and power with workload-aware NoI synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Interference Score to quantify contention slowdown
Formulates NoI synthesis as multi-objective optimization problem
Develops Partition-Aware Reinforcement Learner topology generator
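The multi-objective formulation above can be sketched as a scalarized reward that an RL topology generator like PARL might maximize: reward throughput, penalize tail latency and power, and apply a hard penalty when the SLA is violated. The weights, names, and penalty form below are illustrative assumptions, not the paper's actual objective.

```python
# Hypothetical scalarized reward for an RL-based NoI topology generator,
# trading off throughput, tail (p99) latency, and power with a hard SLA term.
# All weights and the penalty magnitude are illustrative assumptions.

def topology_reward(throughput, p99_latency, power, sla_latency,
                    w_tput=1.0, w_lat=0.5, w_pow=0.2, sla_penalty=100.0):
    """Higher is better: reward throughput, penalize latency and power.

    A candidate topology that violates the SLA (p99 above the bound)
    takes a large fixed penalty so the learner avoids it.
    """
    r = w_tput * throughput - w_lat * p99_latency - w_pow * power
    if p99_latency > sla_latency:  # SLA violation
        r -= sla_penalty
    return r

# SLA-compliant candidate: 1.0*8.0 - 0.5*4.0 - 0.2*2.0 = 5.6
print(topology_reward(throughput=8.0, p99_latency=4.0, power=2.0, sla_latency=5.0))
# SLA-violating candidate: same terms minus the 100.0 penalty
print(topology_reward(throughput=8.0, p99_latency=6.0, power=2.0, sla_latency=5.0))
```

A weighted sum is the simplest scalarization of a multi-objective problem; Pareto-front methods or constrained RL would be natural alternatives, and the paper may use a different formulation.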
Arnav Shukla
Indraprastha Institute of Information Technology Delhi, New Delhi, India
Harsh Sharma
Washington State University, Pullman, Washington, USA
Srikant Bharadwaj
Microsoft Research, Redmond, Washington, USA
Vinayak Abrol
CSE Department & Infosys Centre for AI, IIIT Delhi, India
Speech/Audio Processing · Generative AI · Theories of Machine/Deep Learning
Sujay Deb
Indraprastha Institute of Information Technology Delhi, New Delhi, India