🤖 AI Summary
In heterogeneous chiplet accelerators, frequent parameter/activation transfers between HBM/DRAM and compute chiplets inject large, bursty flows into the Network-on-Interposer (NoI). During large-model inference, the resulting contention causes severe tail-latency spikes in k-ary n-cube (torus) baseline topologies, leading to SLA violations.
Method: The paper proposes a workload-aware NoI topology co-optimization framework. It introduces an Interference Score (IS) model that quantifies worst-case slowdown under contention, and formulates NoI topology synthesis as a multi-objective optimization problem balancing throughput, tail latency, and power. Topologies are generated automatically by PARL (Partition-Aware Reinforcement Learner), a reinforcement-learning-based topology generator introduced in the paper.
Contribution/Results: The synthesized topologies cut worst-case slowdown to 1.2×, maintain mean throughput competitive with link-rich meshes, and significantly reduce memory-side contention, all while meeting SLA constraints.
📝 Abstract
Heterogeneous chiplet-based systems improve scaling by disaggregating CPUs/GPUs and emerging technologies (HBM/DRAM). However, this on-package disaggregation introduces latency in the Network-on-Interposer (NoI). We observe that in modern large-model inference, parameters and activations routinely move back and forth from HBM/DRAM, injecting large, bursty flows into the interposer. These memory-driven transfers inflate tail latency and violate Service Level Agreements (SLAs) across k-ary n-cube baseline NoI topologies. To address this gap, we introduce an Interference Score (IS) that quantifies worst-case slowdown under contention. We then formulate NoI synthesis as a multi-objective optimization (MOO) problem. We develop PARL (Partition-Aware Reinforcement Learner), a topology generator that balances throughput, latency, and power. PARL-generated topologies reduce contention at the memory cut, meet SLAs, and cut worst-case slowdown to 1.2× while maintaining competitive mean throughput relative to link-rich meshes. Overall, this work reframes NoI design for heterogeneous chiplet accelerators around workload-aware objectives.
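The abstract does not give the IS formula; a minimal illustrative sketch follows, assuming (hypothetically) that IS is the worst-case slowdown ratio of a flow's latency under contention to its latency in isolation. The function name and the numbers are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of an Interference Score as a worst-case slowdown ratio.
# Assumption (not from the paper): IS = max contended latency / isolated latency.

def interference_score(isolated_latency, contended_latencies):
    """Worst-case slowdown of a flow under contention (dimensionless ratio)."""
    return max(contended_latencies) / isolated_latency

# Invented example: a memory-side flow with 100 ns isolated latency that sees
# 110-180 ns under co-running traffic has IS = 1.8, an SLA risk if the target
# worst-case slowdown is, say, 1.2x.
print(interference_score(100.0, [110.0, 145.0, 180.0]))  # → 1.8
```

Under this reading, a topology generator would seek link placements that drive the maximum IS across flows toward the SLA target while trading off throughput and power.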