Characterizing and Efficiently Accelerating Multimodal Generation Model Inference

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Multimodal generative models suffer from high GPU idle time and long end-to-end latency due to dual bottlenecks: compute-intensive feed-forward network (FFN) linear operations and memory-bound attention key-value (KV) cache accesses, which together lead to inefficient resource utilization. Method: A system-level empirical characterization on real hardware identifies these two bottlenecks as the dominant latency factors. The paper then applies a cross-stack set of optimization levers: dynamic scheduling at the application layer; KV cache compression and bandwidth-aware memory management at the system layer; and quantization-aware kernel fusion with operator-level parallelism at the hardware layer. Contribution/Results: Rather than optimizing software and hardware in isolation, the work establishes a holistic hardware-software co-design baseline for multimodal generative model inference. Evaluation shows a 3.88× improvement in end-to-end inference latency, markedly reduced GPU idle time, and relieved GPU memory bandwidth pressure, supporting efficient deployment at billion-user scale.
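The memory-bound KV access pattern the summary describes can be sketched with a toy single-head decode loop. This is a pure-Python illustration under assumed names and shapes, not the paper's implementation: each generated token appends one key/value pair but must re-read the entire cache, so bytes moved grow with sequence length while per-step compute stays small.

```python
import math
import random

d = 8                       # head dimension (toy size, an assumption)
k_cache, v_cache = [], []   # cached keys/values for previously generated tokens

def decode_step(q, k_new, v_new):
    """One auto-regressive step: append the new token's K/V, then attend.

    Re-reading the whole cache each step is what makes decode memory-bound:
    arithmetic per byte loaded shrinks as the cached sequence grows.
    """
    k_cache.append(k_new)
    v_cache.append(v_new)
    # Attention logits against every cached key (t dot products of length d).
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in k_cache]
    # Numerically stable softmax over the logits.
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [x / z for x in w]
    # Weighted sum over cached values -> output vector of length d.
    return [sum(wi * v[j] for wi, v in zip(w, v_cache)) for j in range(d)]

random.seed(0)
vec = lambda: [random.gauss(0, 1) for _ in range(d)]
for _ in range(8):          # generate 8 tokens
    out = decode_step(vec(), vec(), vec())

assert len(k_cache) == 8 and len(out) == d
```

KV cache compression, as named in the summary, attacks exactly the cache re-read cost in this loop by shrinking the bytes each step must load.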

📝 Abstract
Generative artificial intelligence (AI) technology is revolutionizing the computing industry. Not only have its applications broadened to various sectors, but the technology also poses new system design and optimization opportunities. It is capable of understanding and responding in multiple modalities. However, this advanced capability currently comes with significant system resource demands. To sustainably scale generative AI capabilities to billions of users worldwide, inference must be fast and efficient. This paper pinpoints key system design and optimization opportunities by characterizing a family of emerging multi-modal generation models on real systems. Auto-regressive token generation is a critical latency bottleneck, typically dominated by GPU idle time. In addition to memory-intensive attention across generative AI models, linear operations constitute significant inference latency due to the feed-forward networks in Transformer-based models. We demonstrate that state-of-the-art optimization levers, spanning from applications to system software and hardware, set a 3.88x better baseline.
Problem

Research questions and friction points this paper is trying to address.

Characterizing multimodal generation model inference bottlenecks
Optimizing GPU idle time in autoregressive token generation
Reducing latency in Transformer-based models' linear operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Characterizing multi-modal generation models on real systems
Optimizing auto-regressive token generation to reduce GPU idle time
Applying optimization levers across applications, system software, and hardware
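Why attention during decode is memory-bound rather than compute-bound, as the Problem bullets note, follows from a back-of-envelope arithmetic-intensity estimate. The parameter values below are illustrative assumptions (fp16 KV cache, one token decoded per step), not figures from the paper:

```python
# Arithmetic intensity (FLOPs per byte) of one decode-step attention read.
seq_len  = 4096   # tokens already in context (assumed)
d_model  = 4096   # model width (assumed)
bytes_kv = 2      # fp16 storage per element

# QK^T and attn*V: two matrix-vector products, 2 multiply-adds per element.
flops  = 2 * 2 * seq_len * d_model
# K and V caches must each be read once per generated token.
bytes_ = 2 * seq_len * d_model * bytes_kv

intensity = flops / bytes_
print(intensity)  # -> 1.0 FLOP per byte
```

One FLOP per byte is orders of magnitude below the compute-to-bandwidth ratio of modern GPUs, so each decode step stalls on memory traffic and the GPU sits idle between steps, which is the behavior the characterization measures.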
Authors

Yejin Lee — Washington University School of Medicine
Anna Sun — Meta AI Research
Basil Hosmer — AI Research at Meta
Bilge Acun — Research Scientist @ Meta / FAIR (Machine Learning Systems, Sustainable Computing, Distributed Systems)
Can Balioglu — AI Research at Meta
Changhan Wang — AI Research at Meta
Charles David Hernandez — AI Research at Meta
Christian Puhrsch — AI Research at Meta
Daniel Haziza — Facebook AI Research (FAIR)
Driss Guessous — AI Research at Meta
Francisco Massa — Research Engineer at Facebook AI Research (Artificial Intelligence, Computer Vision, Machine Learning)
Jacob Kahn — FAIR, Meta AI | University of Pennsylvania (machine learning, deep learning, artificial intelligence)
Jeffrey Wan — AI Research at Meta
Jeremy Reizenstein — AI Research at Meta
Jiaqi Zhai — AI Research at Meta
Joe Isaacson — AI Research at Meta
Joel Schlosser — AI Research at Meta
Juan Pino — Meta
Kaushik Ram Sadagopan — AI Research at Meta
Leonid Shamis — AI Research at Meta
Linjian Ma — Research scientist, Meta Platforms, Inc. (Numerical Algorithms, Tensors, Quantum Simulation, High Performance Computing, Machine Learning)
Min-Jae Hwang — AI Research at Meta
Mingda Chen — FAIR, Meta (Natural Language Processing, Machine Learning)
Mostafa Elhoushi — Research Scientist @ Cerebras Systems (Deep Learning, Machine Learning, Sensors, Quantum Computing)
Pedro Rodriguez — AI Research at Meta
Ramakanth Pasunuru — AI Research at Meta
Scott Yih — AI Research at Meta
Sravya Popuri — AI Research at Meta
Xing Liu — AI Research at Meta
Carole-Jean Wu — Meta AI / FAIR (Machine Learning Systems, Computer Architecture, Memory Subsystem Design, Energy, Sustainability)