🤖 AI Summary
Multimodal generative models suffer from high GPU idle time and severe end-to-end latency due to two bottlenecks: computationally intensive feed-forward network (FFN) linear operations and memory-bound attention key-value (KV) cache accesses. Together, these lead to inefficient resource utilization.
Method: We conduct a system-level empirical analysis, identifying these bottlenecks as the dominant latency factors, and propose a cross-stack co-optimization framework: dynamic scheduling at the application layer; KV cache compression and bandwidth-aware memory management at the system layer; and quantization-aware kernel fusion and operator-level parallelism at the hardware layer.
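To make the system-layer lever concrete, here is a minimal sketch of KV cache compression via per-head int8 quantization. The tensor layout (`n_heads, seq_len, head_dim`), the per-head scaling granularity, and all sizes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical KV cache compression: quantize fp32 KV tensors to int8
# with one scale factor per attention head (assumed granularity).

def quantize_kv(kv: np.ndarray):
    """Quantize a float32 KV tensor, shape (n_heads, seq_len, head_dim)."""
    scale = np.abs(kv).max(axis=(1, 2), keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)  # guard against an all-zero head
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 KV tensor for attention."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 128, 64)).astype(np.float32)  # toy cache
q, scale = quantize_kv(kv)
recovered = dequantize_kv(q, scale)

print(q.nbytes / kv.nbytes)                 # 0.25: 4x less cache traffic
print(float(np.abs(kv - recovered).max()))  # small reconstruction error
```

Shrinking the cache 4x directly reduces the bytes streamed per decode step, which is the relevant metric since KV attention is memory-bandwidth-bound.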
Contribution/Results: This work breaks the traditional software-hardware decoupled optimization paradigm, establishing the first holistic hardware-software co-design acceleration methodology for multimodal generative models. Evaluation shows a 3.88x speedup in end-to-end inference latency, significantly reduced GPU idle time, and alleviated GPU memory bandwidth pressure, enabling efficient deployment at billion-user scale.
📝 Abstract
Generative artificial intelligence (AI) technology is revolutionizing the computing industry. Not only have its applications broadened across various sectors, but the technology also poses new system design and optimization opportunities. The technology is capable of understanding and responding in multiple modalities. However, this advanced capability currently comes with significant system resource demands. To sustainably scale generative AI capabilities to billions of users worldwide, inference must be fast and efficient. This paper pinpoints key system design and optimization opportunities by characterizing a family of emerging multi-modal generation models on real systems. Auto-regressive token generation is a critical latency bottleneck, typically dominated by GPU idle time. In addition to memory-intensive attention across generative AI models, linear operations constitute a significant share of inference latency due to the feed-forward networks in Transformer-based models. We demonstrate that state-of-the-art optimization levers, spanning applications, system software, and hardware, set a 3.88x better baseline.
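The dual bottlenecks the abstract describes can be illustrated with a back-of-envelope arithmetic-intensity calculation: per decode step, attention must stream the entire KV cache while doing few FLOPs per byte, whereas FFN GEMMs amortize weight reads across a batch. All model sizes below (`d_model`, sequence length, FFN expansion, fp16 storage) are illustrative assumptions, not the paper's measured configuration:

```python
# Rough FLOPs-per-byte estimates for one auto-regressive decode step.
# Numbers are hypothetical; the point is the shape of the comparison.

d_model, seq_len, ffn_mult, bytes_per_elem = 4096, 2048, 4, 2  # fp16

# Attention for ONE new token: ~4*seq_len*d_model FLOPs (scores + values),
# while the whole KV cache (2*seq_len*d_model elements) is read from memory.
attn_flops = 4 * seq_len * d_model
attn_bytes = 2 * seq_len * d_model * bytes_per_elem
attn_intensity = attn_flops / attn_bytes  # ~1 FLOP/byte: memory-bound

def ffn_intensity(batch: int) -> float:
    """FLOPs per byte for the two FFN GEMMs at a given batch size."""
    # (batch x d) @ (d x 4d) and (batch x 4d) @ (4d x d), 2 FLOPs per MAC.
    flops = 2 * 2 * batch * d_model * (ffn_mult * d_model)
    # Weight reads dominate memory traffic at small batch sizes.
    bytes_moved = 2 * d_model * (ffn_mult * d_model) * bytes_per_elem
    return flops / bytes_moved  # grows linearly with batch

print(attn_intensity)    # 1.0 FLOP/byte regardless of batching per request
print(ffn_intensity(1))  # 1.0 at batch 1: also memory-bound
print(ffn_intensity(64)) # 64.0: batching turns the FFN compute-bound
```

This is why the two bottlenecks call for different levers: batching and scheduling help the FFN GEMMs, while attention KV accesses only improve when the bytes themselves shrink (e.g., cache compression) or bandwidth is managed better.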