🤖 AI Summary
Frequent GPU out-of-memory (OOM) errors during multimodal large-model training, coupled with the poor generalizability of existing unimodal memory-prediction methods, hinder efficient resource utilization and cause training interruptions. Method: This paper proposes the first layer-granularity GPU memory peak-prediction framework tailored to multimodal models. It jointly models memory consumption across heterogeneous modalities, such as vision and language components, by parsing the multimodal architecture, capturing inter-layer memory coupling, and integrating a training-trajectory-aware factorized estimation mechanism. Contribution/Results: Experimental evaluation shows that the method achieves a mean absolute percentage error (MAPE) of only 8.7% across diverse multimodal tasks, significantly outperforming unimodal baselines. It effectively prevents OOM-induced training failures and improves GPU resource utilization, enabling robust and scalable multimodal model training.
📝 Abstract
As deep learning models in agentic AI systems grow in scale and complexity, their GPU memory requirements increase and often exceed the available GPU memory capacity, causing out-of-memory (OoM) errors. An OoM error interrupts the entire training run and wastes substantial computational resources, so accurate prediction of GPU memory usage is essential to prevent it. However, previous studies focus only on unimodal architectures and fail to generalize to multimodal models, even though multimodal models are a common choice in agentic AI systems. To address this limitation, we propose a framework that predicts peak GPU memory usage by analyzing the model architecture and training behavior of multimodal models. Specifically, the framework decomposes a multimodal model into its constituent layers and applies factorization to estimate the memory usage of each layer. Our evaluation shows that the framework achieves high prediction accuracy, with an average MAPE of approximately 8.7%.
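To make the layer-wise idea concrete, here is a minimal sketch of per-layer memory estimation and the MAPE metric used for evaluation. This is an illustrative toy model, not the paper's actual framework: the fp32/Adam assumptions, the factorization of activation memory into batch size and token count, and all function names and numbers are assumptions introduced for this example.

```python
# Illustrative sketch (NOT the paper's formulas): estimate training-time
# GPU memory layer by layer, assuming fp32 tensors and Adam optimizer
# states, with activation memory factorized by batch size x token count.

BYTES = 4  # bytes per fp32 element (assumption)

def layer_memory(params, activ_per_token, batch, tokens):
    """Estimate one layer's training-time memory footprint in bytes."""
    weights = params * BYTES
    grads = params * BYTES          # one gradient value per parameter
    optimizer = 2 * params * BYTES  # Adam keeps momentum + variance
    activations = activ_per_token * batch * tokens * BYTES
    return weights + grads + optimizer + activations

def peak_memory(layers, batch, tokens):
    """Sum per-layer estimates; in this toy model the peak occurs when
    all forward activations are alive at the start of the backward pass."""
    return sum(layer_memory(p, a, batch, tokens) for p, a in layers)

def mape(predicted, measured):
    """Mean absolute percentage error, the accuracy metric reported above."""
    errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
    return sum(errors) / len(errors) * 100

# Toy multimodal model: vision layers process more tokens (image patches)
# than text layers, so the two branches are estimated separately.
vision = [(3_000_000, 2048)] * 4  # (params, activations per token)
text = [(5_000_000, 4096)] * 4
estimate = (peak_memory(vision, batch=8, tokens=256)
            + peak_memory(text, batch=8, tokens=128))
```

The key design point this sketch mirrors is that heterogeneous branches contribute different activation footprints per token, so estimating each layer separately and summing captures modality differences that a single whole-model formula would miss.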