🤖 AI Summary
To address the CXL memory bandwidth bottleneck that limits large-model inference performance, this paper proposes CXL-NDP, a transparent near-data processing architecture. Without modifying the CXL.mem protocol or AI model implementations, CXL-NDP integrates a precision-scalable bit-plane layout and transparent lossless compression directly within CXL devices, enabling dynamic quantization and in-situ processing of model weights and KV caches. This approach alleviates bandwidth constraints while preserving numerical fidelity: it achieves a 43% end-to-end inference throughput improvement, extends maximum context length by 87%, and reduces KV cache memory footprint by 46.9%, all with zero precision loss. The design incurs modest hardware overhead, making it practical to deploy across diverse CXL-based accelerator systems.
📝 Abstract
Large language model (LLM) inference is bottlenecked by the limited bandwidth of CXL-based memory used for capacity expansion. We introduce CXL-NDP, a transparent near-data processing architecture that amplifies effective CXL bandwidth without requiring changes to the CXL.mem interface or AI models. CXL-NDP integrates a precision-scalable bit-plane layout for dynamic quantization with transparent lossless compression of weights and KV caches directly within the CXL device. In end-to-end serving, CXL-NDP improves throughput by 43%, extends the maximum context length by 87%, and reduces the KV cache footprint by 46.9% without accuracy loss. Hardware synthesis confirms its practicality with a modest silicon footprint, lowering the barrier for adopting efficient, scalable CXL-based memory in generative AI infrastructure.
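To make the precision-scalable bit-plane layout concrete, here is a minimal illustrative sketch (not the paper's implementation; function names and the truncation scheme are assumptions). Storing the i-th bit of every value contiguously as a "plane" lets a reader fetch only the top k planes to obtain a k-bit quantized view, reading proportionally fewer bytes, while fetching all planes remains lossless:

```python
# Illustrative sketch of a bit-plane layout (hypothetical helper names).
# Each plane p holds bit (bits-1-p) of every value, MSB-first, so
# reading only the first k planes yields a k-bit truncated quantization.

def to_bitplanes(values, bits=8):
    """Split unsigned ints into `bits` planes, MSB first."""
    return [[(v >> (bits - 1 - p)) & 1 for v in values]
            for p in range(bits)]

def from_bitplanes(planes, bits=8):
    """Reconstruct from the top len(planes) planes; planes that were
    not read are treated as zero (truncation quantization)."""
    vals = [0] * len(planes[0])
    for p, plane in enumerate(planes):
        for i, bit in enumerate(plane):
            vals[i] |= bit << (bits - 1 - p)
    return vals

weights = [200, 37, 129, 15]
planes = to_bitplanes(weights)

full = from_bitplanes(planes)        # all 8 planes: lossless
coarse = from_bitplanes(planes[:4])  # top 4 planes: 4-bit view
```

In this sketch, reading 4 of 8 planes halves the bytes transferred while still recovering each value's top 4 bits; the device can serve either view from the same stored layout, which is the property that lets precision scale dynamically without rewriting memory.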