🤖 AI Summary
To address high memory latency in data-intensive applications running on memory-disaggregated architectures, this paper proposes a hardware-software co-designed, memory-centric coroutine system. Our approach decouples memory operations from coroutine scheduling by introducing coroutine-specific memory instructions and memory-directed branch prediction. It further integrates compiler-level request coalescing with lightweight context switching to enable dynamic scheduling optimization. The system is implemented on an LLVM-based toolchain, the RISC-V XiangShan processor, and an FPGA platform, incorporating asynchronous memory units and hardware support for decoupled memory access. Experimental results show that the pure-software variant achieves a 1.51× speedup over state-of-the-art coroutine methods on an Intel server. In FPGA-emulated disaggregated systems, the full hardware-software co-design delivers average performance improvements of 3.39× and 4.87× under memory latencies of 200 ns and 800 ns, respectively.
📝 Abstract
Modern data-intensive applications face memory latency challenges that are exacerbated by disaggregated memory systems. Recent work shows that coroutines are promising for interleaving tasks and hiding memory latency, but they struggle to balance latency-hiding efficiency against runtime overhead. We present CoroAMU, a hardware-software co-designed system for memory-centric coroutines. It introduces compiler procedures that optimize coroutine code generation, minimize context state, and coalesce requests, paired with a simple programming interface. With hardware support for decoupled memory operations, we enhance the Asynchronous Memory Unit to further exploit dynamic coroutine scheduling through coroutine-specific memory operations and a novel memory-guided branch prediction mechanism. CoroAMU is implemented with LLVM and the open-source XiangShan RISC-V processor on an FPGA platform. Experiments demonstrate that the CoroAMU compiler achieves a 1.51× speedup over state-of-the-art coroutine methods on Intel server processors. Combined with the optimized hardware for decoupled memory access, it delivers 3.39× and 4.87× average performance improvements over the baseline processor on FPGA-emulated disaggregated systems under 200 ns and 800 ns memory latency, respectively.
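To make the latency-hiding idea behind memory-centric coroutines concrete, the following is a minimal sketch (not CoroAMU's actual API or instruction set) of the general pattern: each task suspends at the point where a long-latency load would be issued asynchronously, and a scheduler switches to another ready coroutine instead of stalling. Here Python generators stand in for lightweight coroutines, `pointer_chase` for a dependent-load workload, and `round_robin` for a simple dynamic scheduler; all names are illustrative.

```python
def pointer_chase(table, start, steps):
    """Coroutine chasing `steps` dependent links through `table`.

    Each bare `yield` marks a suspension point where, in a real system,
    an asynchronous memory request would be in flight.
    """
    idx = start
    for _ in range(steps):
        yield              # suspend while the "load" completes
        idx = table[idx]   # consume the loaded value
    yield idx              # final yield hands the result to the scheduler


def round_robin(tasks):
    """Round-robin scheduler: on every suspension, switch to another task.

    Interleaving many in-flight chases is what hides per-load latency.
    """
    results = []
    while tasks:
        task = tasks.pop(0)
        value = next(task)
        if value is None:
            tasks.append(task)      # still "waiting on memory": requeue
        else:
            results.append(value)   # coroutine finished with its result
    return results


table = [(i * 7 + 3) % 16 for i in range(16)]           # toy pointer chain
tasks = [pointer_chase(table, s, 4) for s in range(3)]  # three concurrent chases
print(round_robin(tasks))
```

With N interleaved coroutines, up to N memory requests overlap, so total stall time shrinks roughly by a factor of N as long as context-switch overhead stays small relative to memory latency, which is the balance the paper targets with minimized context state and hardware-assisted switching.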