🤖 AI Summary
The practical efficacy and scenario-specific suitability of multimodal large language models (MLLMs) remain poorly understood, particularly the performance gap between large and small models.
Method: This study systematically benchmarks large MLLMs (GPT-4V, GPT-4o) against compact counterparts (the LLaVA series, Phi-3-Vision) under a multi-dimensional evaluation framework spanning visual understanding, cross-modal alignment, and domain adaptation, combining standard benchmarks with real-world industrial and automotive tasks (a minimal harness sketch follows this summary).
Contribution/Results: We identify a significant capability gap between large and small MLLMs in complex reasoning, especially fine-grained recognition and long-horizon temporal modeling, and we characterize shared failure modes. While small models achieve competitive performance on certain perception tasks, they consistently underperform on tasks requiring deep reasoning. These findings provide empirical guidance for MLLM selection, lightweight architecture design, and targeted mitigation of capability bottlenecks.
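To make the evaluation protocol concrete, the sketch below shows one minimal way a multi-dimensional harness of this kind could be structured in Python. The model wrappers, sample data, dimension names, and exact-match scorer are hypothetical placeholders for illustration under stated assumptions, not the study's actual benchmark code.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalSample:
    """One benchmark item, tagged with the capability dimension it probes."""
    image_path: str
    prompt: str
    answer: str
    dimension: str  # e.g. "visual_understanding", "cross_modal_alignment", "domain_adaptation"

def exact_match(prediction: str, answer: str) -> float:
    """Toy scorer: 1.0 on case-insensitive exact match, else 0.0."""
    return float(prediction.strip().lower() == answer.strip().lower())

def evaluate(model: Callable[[str, str], str],
             samples: list[EvalSample],
             scorer: Callable[[str, str], float] = exact_match) -> dict[str, float]:
    """Run a model over all samples and return mean accuracy per dimension."""
    scores: dict[str, list[float]] = {}
    for s in samples:
        pred = model(s.image_path, s.prompt)
        scores.setdefault(s.dimension, []).append(scorer(pred, s.answer))
    return {dim: sum(vals) / len(vals) for dim, vals in scores.items()}

if __name__ == "__main__":
    # Stand-in "models": a real run would call GPT-4o / LLaVA / Phi-3-Vision
    # endpoints here; these placeholders just return fixed strings.
    def large_model(image_path: str, prompt: str) -> str:
        return "forklift"

    def small_model(image_path: str, prompt: str) -> str:
        return "truck"

    samples = [
        EvalSample("warehouse.jpg", "What vehicle is shown?", "forklift",
                   "visual_understanding"),
        EvalSample("dashboard.jpg", "Is the fuel warning light on?", "yes",
                   "domain_adaptation"),
    ]
    for name, model in [("large", large_model), ("small", small_model)]:
        print(name, evaluate(model, samples))
```

In a real study the exact-match scorer would be replaced by task-appropriate metrics (e.g. VQA accuracy or an LLM judge), but the per-dimension aggregation shown here is what allows large and small models to be compared capability by capability rather than by a single overall score.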
📝 Abstract
Multimodal large language models (MLLMs) such as GPT-4V and GPT-4o have achieved remarkable advances in understanding and generating multimodal content, showing superior quality and capability across diverse tasks. However, their deployment faces significant challenges, including slow inference, high computational cost, and impracticality for on-device applications. In contrast, the emergence of small MLLMs, exemplified by the LLaVA series and Phi-3-Vision, offers promising alternatives with faster inference, lower deployment cost, and the ability to handle domain-specific scenarios. Despite their growing adoption, the capability boundaries between large and small MLLMs remain underexplored. In this work, we conduct a systematic and comprehensive evaluation benchmarking both small and large MLLMs, spanning general capabilities such as object recognition, temporal reasoning, and multimodal comprehension, as well as real-world applications in domains such as industry and automotive. Our evaluation reveals that small MLLMs can achieve performance comparable to large models in specific scenarios but lag significantly on complex tasks requiring deeper reasoning or nuanced understanding. Furthermore, we identify common failure cases in both small and large MLLMs, highlighting domains where even state-of-the-art models struggle. We hope our findings will guide the research community in pushing the quality boundaries of MLLMs, advancing their usability and effectiveness across diverse applications.