AI Summary
Efficient and privacy-preserving inference of large generative models on resource-constrained end devices (e.g., mobile phones and laptops) remains challenging due to hardware heterogeneity, memory bottlenecks, and computational limitations. Method: This paper introduces the first cross-platform GPU inference framework, featuring a unified GPU API abstraction layer compatible with NVIDIA, AMD, Intel, and mainstream mobile GPUs; a novel lightweight kernel fusion and memory-aware dynamic tensor reuse mechanism; and support for FP16/INT4 quantization, dynamic tensor sharding, and cross-vendor driver adaptation. Contribution/Results: The framework enables real-time inference (>20 tokens/s) of billion-parameter generative models directly on end-device GPUs, scaling model capacity 10-100× beyond prior solutions. On Snapdragon 8 Gen3 and RTX 4060 Laptop GPUs, it achieves a 90% end-to-end latency reduction and a 10× throughput improvement, effectively overcoming end-device compute and memory constraints.
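To make the FP16/INT4 quantization point concrete, the sketch below shows the basic idea of symmetric per-row INT4 weight quantization in plain NumPy. This is an illustrative example only, not ML Drift's actual GPU kernels; the function names and the per-row scaling scheme are assumptions for demonstration.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric per-row INT4 quantization: map each row to integers in [-8, 7].

    Illustrative sketch only; real engines fuse this with packed storage
    and GPU dequantization kernels.
    """
    # One scale per row so an outlier in one row does not crush the others.
    scale = np.abs(weights).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero rows
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Usage: quantize a small random weight matrix and check the round-trip error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
# Per-element error is bounded by half the row scale (rounding error).
assert np.all(np.abs(w - w_hat) <= s / 2 + 1e-6)
```

The 4x memory reduction relative to FP16 (before accounting for the stored scales) is what lets billion-parameter models fit within end-device memory budgets.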
Abstract
Driven by advances in generative AI, large machine learning models have revolutionized domains such as image processing, audio synthesis, and speech recognition. While server-based deployments remain the locus of peak performance, the imperative for on-device inference, necessitated by privacy and efficiency considerations, persists. Recognizing GPUs as the on-device ML accelerator with the widest reach, we present ML Drift, an optimized framework that extends the capabilities of state-of-the-art GPU-accelerated inference engines. ML Drift enables on-device execution of generative AI workloads that contain 10 to 100x more parameters than existing on-device generative AI models. ML Drift addresses intricate engineering challenges associated with cross-GPU API development and ensures broad compatibility across mobile and desktop/laptop platforms, thereby facilitating the deployment of significantly more complex models on resource-constrained devices. Our GPU-accelerated ML/AI inference engine achieves an order-of-magnitude performance improvement relative to existing open-source GPU inference engines.