RevaMp3D: Architecting the Processor Core and Cache Hierarchy for Systems with Monolithically-Integrated Logic and Memory

📅 2022-10-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Monolithic 3D (M3D) integration alleviates the traditional memory wall but shifts the performance and energy-efficiency bottlenecks to the processor core and cache hierarchy. This work first shows that, under M3D architectures, eliminating the shared last-level cache (LLC) matches or outperforms scaling LLC capacity or reducing LLC latency. Building on this insight, the authors propose RevaMp3D, a microarchitectural redesign that combines: a wider pipeline leveraging high-density inter-tier interconnects; a lower-latency L1 cache; instruction-level memoization that caches repeatedly fetched, decoded, and reordered instructions; power gating of idle pipeline units; and fine-grained, lightweight synchronization. Evaluated against a baseline M3D system, RevaMp3D achieves 81% average speedup, 35% energy reduction, and 12.3% die area savings, enabling co-optimization of computation, memory, and synchronization.
📝 Abstract
Recent nano-technological advances enable the Monolithic 3D (M3D) integration of multiple memory and logic layers in a single chip with fine-grained connections. M3D technology leads to significantly higher main memory bandwidth and shorter latency than existing 3D-stacked systems. We show for a variety of workloads on a state-of-the-art M3D system that the performance and energy bottlenecks shift from the main memory to the core and cache hierarchy. Hence, there is a need to revisit current core and cache designs that have been conventionally tailored to tackle the memory bottleneck. Our goal is to redesign the core and cache hierarchy, given the fundamentally new trade-offs of M3D, to benefit a wide range of workloads. To this end, we take two steps. First, we perform a design space exploration of the cache and core's key components. We highlight that in M3D systems, (i) removing the shared last-level cache leads to similar or larger performance benefits than increasing its size or reducing its latency; (ii) improving L1 latency has a large impact on improving performance; (iii) wider pipelines are increasingly beneficial; (iv) the performance impact of branch speculation and pipeline frontend increases; (v) the current synchronization schemes limit parallel speedup. Second, we propose an optimized M3D system, RevaMp3D, where (i) using the tight connectivity between logic layers, we efficiently increase pipeline width, reduce L1 latency, and enable fine-grained synchronization; (ii) using the high-bandwidth and energy-efficient main memory, we alleviate the amplified energy and speculation bottlenecks by memoizing the repetitive fetched, decoded, and reordered instructions and turning off the relevant parts of the core pipeline when possible. RevaMp3D provides, on average, 81% speedup, 35% energy reduction, and 12.3% smaller area compared to the baseline M3D system.
Problem

Research questions and friction points this paper is trying to address.

Redesign processor core for monolithic 3D integrated systems
Optimize cache hierarchy to address shifted performance bottlenecks
Enhance energy efficiency via M3D-specific architectural innovations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Remove shared last-level cache for speedups
Reduce L1 latency with M3D layout
Widen pipeline structures using M3D area
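The memoization idea above can be illustrated with a minimal sketch. This is not the paper's implementation (RevaMp3D operates at the microarchitecture level); it is only a software analogy, with the class name `DecodeMemoTable` and the trace values invented for illustration: a small table keyed by program counter caches decoded micro-ops, so repeated instructions (e.g. loop bodies) skip the fetch/decode work on later executions.

```python
# Illustrative sketch only, not the paper's hardware design: a memoization
# table that caches decoded micro-ops by program counter (PC), so repeated
# instructions can bypass the fetch/decode stages on a hit.

class DecodeMemoTable:
    def __init__(self, capacity=256):
        self.capacity = capacity
        self.table = {}      # pc -> decoded micro-ops
        self.hits = 0
        self.misses = 0

    def lookup_or_decode(self, pc, decode_fn):
        """Return cached micro-ops for pc; decode and cache on a miss."""
        if pc in self.table:
            self.hits += 1
            return self.table[pc]
        self.misses += 1
        uops = decode_fn(pc)
        if len(self.table) < self.capacity:  # insert while capacity remains
            self.table[pc] = uops
        return uops


# A loop body re-executes the same PCs, so later iterations hit the table.
memo = DecodeMemoTable()
trace = [0x400, 0x404, 0x408] * 3          # three iterations of a 3-PC loop
for pc in trace:
    memo.lookup_or_decode(pc, lambda p: [("uop", p)])
print(memo.hits, memo.misses)              # prints: 6 3
```

In the paper's setting, the energy saved by such reuse is amplified because M3D's energy-efficient main memory makes the core pipeline, not memory, the dominant energy consumer; hits allow the relevant pipeline stages to be power-gated.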