🤖 AI Summary
This work addresses inefficiencies in existing distributed training frameworks for multimodal large language models, which overlook the heterogeneity of input data modalities, leading to imbalanced computational loads and suboptimal GPU utilization. To tackle this issue, the study introduces data-characteristic awareness into training scheduling: a novel approach that leverages runtime performance profiling and data-feature modeling to construct a predictive scheduling policy. This policy enables dynamic load balancing across pipeline stages and micro-batches, effectively mitigating the computation skew caused by modality disparities. Evaluated on large-scale multimodal benchmarks, the proposed method achieves up to a 3.6× speedup in training throughput over state-of-the-art distributed training frameworks.
📄 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable advances by integrating text, image, and audio understanding within a unified architecture. However, existing distributed training frameworks remain fundamentally data-blind: they parallelize computation without accounting for variations in input data characteristics. This data unawareness leads to severe computation skew across stages and micro-batches, where heterogeneous multimodal inputs incur different processing costs. Consequently, GPU resources are unevenly utilized, synchronization delays accumulate, and overall training efficiency degrades. To address this limitation, we present DFLOP, a data-driven framework for multimodal LLM training pipeline optimization. DFLOP continuously profiles runtime behavior to capture data-induced computation variance and employs predictive scheduling to balance workloads across stages and micro-batches. By coupling data characteristics with execution planning, DFLOP substantially improves GPU utilization and throughput. Extensive experiments on large-scale multimodal benchmarks show that DFLOP achieves up to 3.6× faster training compared to state-of-the-art distributed training frameworks.
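To make the core idea concrete, the sketch below illustrates one way predictive, data-aware micro-batch balancing could work. It is a minimal illustration, not DFLOP's actual implementation: the cost model `predict_cost`, its coefficients, and the greedy longest-processing-time (LPT) assignment are all assumptions introduced here for exposition.

```python
# Illustrative sketch only; DFLOP's real scheduler is not described in
# enough detail here to reproduce. We assume (1) a per-sample cost
# predictor fitted from runtime profiling, and (2) a greedy LPT
# heuristic that assigns each sample to the currently lightest
# micro-batch, so all micro-batches finish at roughly the same time.

def predict_cost(sample):
    # Hypothetical cost model: compute cost grows with text length and
    # (more steeply) with the number of image patches, reflecting
    # modality-dependent processing cost. The weights are made up.
    return sample.get("text_tokens", 0) + 4 * sample.get("image_patches", 0)

def balance_microbatches(samples, num_microbatches):
    """Greedily pack samples into micro-batches balanced by predicted cost."""
    bins = [{"cost": 0.0, "samples": []} for _ in range(num_microbatches)]
    # LPT: place the most expensive samples first, each into the
    # micro-batch with the least accumulated predicted cost.
    for s in sorted(samples, key=predict_cost, reverse=True):
        lightest = min(bins, key=lambda b: b["cost"])
        lightest["cost"] += predict_cost(s)
        lightest["samples"].append(s)
    return bins
```

With a modality-aware cost estimate, this kind of packing evens out per-micro-batch compute, which is what reduces pipeline bubbles and synchronization stalls; a naive size-based split would leave image-heavy micro-batches as stragglers.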