Learning to Inference Adaptively for Multimodal Large Language Models

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from high inference overhead and struggle to adapt to dynamic resource constraints, such as fluctuating runtime latency budgets and system-level resource contention. Method: This paper proposes AdaLLaVA, an adaptive inference framework that lets an MLLM jointly account for input semantics and real-time latency constraints during inference, restructuring the model online. It integrates reinforcement learning–driven dynamic operation scheduling, latency-aware runtime reconfiguration, and multi-granularity control of the computational path, while allowing plug-and-play incorporation of token pruning and cross-model generalization. Contribution/Results: AdaLLaVA strictly satisfies user-specified latency budgets across question-answering, reasoning, and hallucination benchmarks, achieving Pareto-optimal accuracy–latency trade-offs; it improves throughput by up to 4.2× and remains compatible with diverse mainstream MLLM architectures.
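The core idea of scheduling operations under a latency budget can be illustrated with a minimal sketch. This is a hypothetical simplification, not AdaLLaVA's actual algorithm: it assumes per-layer latency costs and per-layer importance scores (which AdaLLaVA would obtain from its learned scheduler) and greedily selects which layers to execute so the estimated total latency stays within the budget.

```python
# Hypothetical sketch of latency-aware operation selection, in the spirit of
# an adaptive scheduler. Layer costs and importance scores are illustrative
# inputs, not values from the paper.

def select_execution_plan(layer_costs, importance, latency_budget):
    """Pick which layers to run within a latency budget.

    Greedily admits layers in decreasing order of importance score
    (e.g., produced by a learned scheduler) until the estimated
    latency budget is exhausted. Returns the chosen layer indices
    (in execution order) and the total estimated latency spent.
    """
    # Rank layers by importance, most important first.
    ranked = sorted(range(len(layer_costs)), key=lambda i: -importance[i])
    plan, spent = [], 0.0
    for i in ranked:
        # Admit a layer only if it still fits within the budget.
        if spent + layer_costs[i] <= latency_budget:
            plan.append(i)
            spent += layer_costs[i]
    return sorted(plan), spent


# Example: four layers of 5 ms each, a 12 ms budget; the two most
# important layers fit, the rest are skipped.
plan, spent = select_execution_plan([5.0, 5.0, 5.0, 5.0],
                                    [0.9, 0.1, 0.8, 0.4], 12.0)
```

A real system would replace the greedy rule with the learned scheduler's policy and recompute the plan per input and per budget, but the interface (input-dependent scores in, execution plan out) is the same shape.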

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown impressive capabilities in reasoning, yet come with substantial computational cost, limiting their deployment in resource-constrained settings. Despite recent efforts on improving the efficiency of MLLMs, prior solutions fall short in responding to varying runtime conditions, in particular changing resource availability (e.g., contention due to the execution of other programs on the device). To bridge this gap, we introduce AdaLLaVA, an adaptive inference framework that learns to dynamically reconfigure operations in an MLLM during inference, accounting for the input data and a latency budget. We conduct extensive experiments across benchmarks involving question-answering, reasoning, and hallucination. Our results show that AdaLLaVA effectively adheres to the input latency budget, achieving varying accuracy and latency tradeoffs at runtime. Further, we demonstrate that AdaLLaVA adapts to both input latency and content, can be integrated with token selection for enhanced efficiency, and generalizes across MLLMs. Our project webpage with code release is at https://zhuoyan-xu.github.io/ada-llava/.
Problem

Research questions and friction points this paper is trying to address.

Addresses high computational cost of Multimodal Large Language Models
Improves efficiency under varying runtime resource conditions
Introduces adaptive inference for dynamic operation reconfiguration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive inference framework for MLLMs
Dynamic reconfiguration based on input data
Latency budget adherence with accuracy tradeoffs