Modality Inflation: Energy Characterization and Optimization Opportunities for MLLM Inference

📅 2025-12-27
🤖 AI Summary
This paper identifies "modality inflation," a phenomenon in multimodal large language model (MLLM) inference where visual inputs induce substantial computational and energy overhead due to redundant visual encoding and excessively long visual token sequences. Method: Leveraging fine-grained power profiling on NVIDIA A100 GPUs, the authors conduct the first energy-efficiency attribution analysis across the vision-encoding, prefill, and decoding stages, revealing 17%–94% higher energy consumption versus text-only baselines, with bottlenecks varying across models (in either visual encoding or long-sequence processing). They propose a stage-adaptive Dynamic Voltage and Frequency Scaling (DVFS) strategy that achieves significant energy savings under latency constraints while incurring less than 8% throughput degradation. Contribution/Results: The paper formally defines "modality inflation," establishes the first MLLM-specific energy-attribution methodology, and empirically validates the efficacy of architecture-aware, stage-specific energy optimization.

📝 Abstract
Multimodal large language models (MLLMs) are built on text-only LLMs by incorporating additional modalities, enabling multimodal understanding and a broader range of applications. However, these additions introduce a previously unexplored energy trade-off across modalities that remains poorly understood, as most prior work focuses on text-only models. In this paper, we examine modality inflation, a key source of inefficiency in which multimodal inputs increase inference workloads through extra encoding stages and expanded token sequences. We provide the first detailed, stage-level analysis of energy consumption in MLLM inference by breaking the pipeline into vision encoding, prefill, and decoding stages. Using four representative MLLMs evaluated on an NVIDIA A100 GPU, we quantify the additional energy required for multimodal inference compared to text-only baselines, observing overheads ranging from 17% to 94% across models for identical inputs. Our results show that energy bottlenecks differ widely across model architectures, stemming either from compute-heavy vision encoders or from the downstream impact of large visual token sequences during prefill. By examining GPU power traces, we further uncover substantial GPU underutilization during multimodal execution and show that input complexity leads to markedly different energy scaling behaviors across models. Finally, we demonstrate that stage-wise dynamic voltage and frequency scaling (DVFS) is an effective optimization, allowing energy savings with only modest performance impact. Together, these findings offer practical insights and concrete guidance for designing more energy-efficient multimodal LLM serving systems.
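The stage-level energy attribution the abstract describes can be sketched with a toy calculation. This is a hypothetical illustration of the general technique, not the authors' code: given timestamped GPU power samples (such as those NVML exposes) and the time boundaries of the vision-encoding, prefill, and decoding stages, each stage's energy is the time integral of power over its window.

```python
def stage_energy(timestamps, power_w, stages):
    """Attribute energy (joules) to pipeline stages by integrating
    power samples over each stage's [start, end) time window.

    timestamps: sample times in seconds, sorted ascending
    power_w:    GPU power draw at each timestamp, in watts
    stages:     dict mapping stage name -> (start, end) in seconds
    """
    energy = {}
    for name, (start, end) in stages.items():
        total = 0.0
        for i in range(len(timestamps) - 1):
            t0, t1 = timestamps[i], timestamps[i + 1]
            # clip each sampling interval to the stage window
            lo, hi = max(t0, start), min(t1, end)
            if lo < hi:
                # approximate trapezoidal integration over the
                # clipped interval (endpoint powers, not interpolated)
                total += 0.5 * (power_w[i] + power_w[i + 1]) * (hi - lo)
        energy[name] = total
    return energy
```

For example, four samples at 100 W over three seconds, split into a two-second prefill window and a one-second decoding window, attribute 200 J and 100 J respectively.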
Problem

Research questions and friction points this paper is trying to address.

Analyzes energy inefficiency in multimodal large language models
Quantifies energy overhead from vision encoding and token expansion
Proposes dynamic voltage scaling for energy-efficient MLLM inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stage-level energy analysis of MLLM inference
GPU underutilization identification via power traces
Stage-wise DVFS for energy-efficient optimization
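The stage-wise DVFS idea can be illustrated with a simple model; the cost assumptions below are ours, not the paper's formulation. For a compute-bound stage, latency scales roughly as the inverse of clock frequency and dynamic power grows roughly cubically with frequency, so energy per stage falls as the clock drops; the lowest frequency whose latency still fits the stage's budget minimizes energy under the constraint.

```python
def pick_stage_frequency(base_latency_s, latency_budget_s, freqs_mhz, f_base_mhz):
    """Choose the lowest GPU clock for one stage that keeps latency
    within budget, under a simple compute-bound model:
      latency(f) ~ base_latency * f_base / f
      power(f)   ~ f**3  (rough DVFS approximation)
    Returns (chosen_freq_mhz, est_latency_s, relative_energy).
    """
    for f in sorted(freqs_mhz):
        latency = base_latency_s * f_base_mhz / f
        if latency <= latency_budget_s:
            # energy ~ power * time ~ f**3 * (1/f) = f**2 (relative units)
            return (f, latency, (f / f_base_mhz) ** 2)
    # no frequency meets the budget: fall back to the maximum clock
    f = max(freqs_mhz)
    return (f, base_latency_s * f_base_mhz / f, (f / f_base_mhz) ** 2)
```

Running such a selector per stage (vision encoding, prefill, decoding) rather than once per request is what makes the scheme "stage-wise": stages with slack latency budgets or low GPU utilization can run at lower clocks without affecting end-to-end throughput much.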
Mona Moghadampanah
Virginia Tech, Blacksburg, VA, USA
Adib Rezaei Shahmirzadi
Virginia Tech, Blacksburg, VA, USA
Farhana Amin
Virginia Tech, Blacksburg, VA, USA
Dimitrios S. Nikolopoulos
John W. Hancock Professor of Engineering, IEEE Fellow, Virginia Tech
High Performance Computing · Runtime Systems · Memory Systems