🤖 AI Summary
Lock contention in disaggregated memory (DM) severely congests memory-node NICs, degrading application performance by up to several orders of magnitude. To address this, the paper proposes DecLock, a hybrid lock mechanism that combines centralized state maintenance on memory nodes with decentralized ownership transfer across compute nodes. Its cooperative queue-notify protocol atomically enqueues lock waiters on memory nodes via RDMA atomic operations, while lightweight message-based notifications between compute nodes hand off lock ownership, ensuring fairness and sparing memory-node NICs for application traffic. Evaluations show throughput improvements of up to 43.37× over state-of-the-art RDMA-based spinlocks and 1.81× over MCS locks; an object store and the Sherman database index improve throughput by up to 35.60× and 2.31×, respectively, with 99th-percentile latency reduced by up to 98.8%.
📝 Abstract
This paper reveals that locking can significantly degrade the performance of applications on disaggregated memory (DM), sometimes by several orders of magnitude, due to contention on the NICs of memory nodes (MN-NICs). To address this issue, we present DecLock, a locking mechanism for DM that employs decentralized coordination for ownership transfer across compute nodes (CNs) while retaining centralized state maintenance on memory nodes (MNs). DecLock features cooperative queue-notify locking that queues lock waiters on MNs atomically, enabling clients to transfer lock ownership via message-based notifications between CNs. This approach conserves MN-NIC resources for DM applications and ensures fairness. Evaluations show DecLock achieves throughput improvements of up to 43.37× and 1.81× over state-of-the-art RDMA-based spinlocks and MCS locks, respectively. Furthermore, DecLock helps two DM applications, including an object store and a real-world database index (Sherman), avoid performance degradation under high contention, improving throughput by up to 35.60× and 2.31× and reducing 99th-percentile latency by up to 98.8% and 82.1%.
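The abstract's queue-notify idea can be illustrated with a toy single-process sketch: the "memory node" is touched only once per acquire (an atomic enqueue, standing in for an RDMA atomic), and ownership is handed off by waking the next waiter directly through a per-waiter channel (standing in for a CN-to-CN notification), so no waiter ever spins on memory-node state. All names below (`QueueNotifyLock`, the use of `threading.Event` as the notification channel) are illustrative assumptions, not DecLock's actual implementation.

```python
import threading
from collections import deque

class QueueNotifyLock:
    """Toy sketch of queue-notify locking (assumed shape, not DecLock itself).

    The 'memory node' state is the wait queue; it is updated atomically once
    per acquire/release. Handoff between holders happens via direct per-waiter
    notification, so waiters never poll the memory-node side.
    """
    def __init__(self):
        self._mn_lock = threading.Lock()  # stands in for MN-side RDMA atomics
        self._queue = deque()             # FIFO wait queue "on the MN"
        self._held = False

    def acquire(self):
        me = threading.Event()            # per-waiter notification channel
        with self._mn_lock:               # one atomic enqueue, conceptually
            if not self._held:
                self._held = True         # lock was free: take it immediately
                return
            self._queue.append(me)
        me.wait()                         # block until predecessor notifies us

    def release(self):
        with self._mn_lock:               # one atomic dequeue, conceptually
            if not self._queue:
                self._held = False        # no waiters: mark lock free
                return
            nxt = self._queue.popleft()   # FIFO order gives fairness
        nxt.set()                         # direct CN-to-CN style handoff
```

A quick usage check: several threads incrementing a shared counter under this lock lose no updates, since the FIFO queue serializes the critical section.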