🤖 AI Summary
To address high latency in multimodal real-time inference on edge devices—caused by tight coupling between dynamic sensing and model execution, and complex inter-modal dependencies—this paper proposes a fine-grained pipelined architecture. It decouples perception from computation to enable “process-on-arrival” data handling; introduces a lightweight temporal aggregation module to model cross-frame dependencies; incorporates an adaptive multimodal configuration optimizer that dynamically selects optimal modality subsets and computation paths; and pioneers a cross-modal speculative skipping mechanism to bypass redundant computations without compromising accuracy. Evaluated on a real-world UAV platform, the approach reduces end-to-end latency by 42.3% while maintaining task accuracy above 98.1%, significantly outperforming state-of-the-art methods. This work establishes a new paradigm for efficient multimodal inference in dynamic edge environments.
📝 Abstract
Real-time multimodal inference on resource-constrained edge devices is essential for applications such as autonomous driving, human-computer interaction, and mobile health. However, prior work often overlooks the tight coupling between sensing dynamics and model execution, as well as the complex dependencies across modalities. In this paper, we propose MMEdge, a new on-device multimodal inference framework based on pipelined sensing and encoding. Instead of waiting for complete sensor inputs, MMEdge decomposes the entire inference process into a sequence of fine-grained sensing and encoding units, allowing computation to proceed incrementally as data arrive. MMEdge also introduces a lightweight yet effective temporal aggregation module that captures rich temporal dynamics across pipelined units to preserve accuracy. This pipelined design also opens up opportunities for fine-grained cross-modal optimization and early decision-making during inference. To further improve performance under resource variability and varying input complexity, MMEdge incorporates an adaptive multimodal configuration optimizer that dynamically selects the optimal sensing and model configuration for each modality under latency constraints, and a cross-modal speculative skipping mechanism that bypasses future units of slower modalities once early predictions reach sufficient confidence. We evaluate MMEdge on two public multimodal datasets and deploy it on a real-world unmanned aerial vehicle (UAV)-based multimodal testbed. The results show that MMEdge significantly reduces end-to-end latency while maintaining high task accuracy across diverse system and data dynamics.
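To make the two core ideas concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of process-on-arrival pipelining with confidence-based speculative skipping: each modality's stream is encoded unit by unit as data arrive, and remaining units of slower modalities are skipped once an early prediction is confident enough. All names (`encode_unit`, `fuse`, `confidence`, `CONF_THRESHOLD`) are illustrative assumptions.

```python
# Illustrative sketch of MMEdge-style pipelined inference.
# All functions below are toy placeholders, not the paper's models.

CONF_THRESHOLD = 0.9  # hypothetical confidence bar for early decisions

def encode_unit(modality, chunk):
    # Placeholder per-unit encoder: summarizes one sensing unit.
    return sum(chunk) / len(chunk)

def fuse(features):
    # Placeholder temporal/cross-modal aggregation over encoded units.
    vals = [v for units in features.values() for v in units]
    return sum(vals) / len(vals)

def confidence(fused):
    # Placeholder confidence estimate from the fused representation.
    return min(1.0, abs(fused))

def pipelined_infer(streams):
    """streams: dict mapping modality -> list of sensing units (chunks).

    Encodes each unit as it 'arrives' instead of waiting for complete
    inputs, and stops early (skipping future units of slower modalities)
    once the early prediction reaches sufficient confidence.
    Returns (fused_prediction, number_of_time_steps_used)."""
    features = {m: [] for m in streams}
    max_len = max(len(units) for units in streams.values())
    for t in range(max_len):
        # Process whichever units have arrived at this step.
        for m, units in streams.items():
            if t < len(units):
                features[m].append(encode_unit(m, units[t]))
        fused = fuse(features)
        if confidence(fused) >= CONF_THRESHOLD:
            return fused, t + 1  # speculative skip: decide early
    return fused, max_len
```

In this toy version a high-confidence fused estimate after the first unit ends inference immediately, whereas low-confidence inputs consume every unit; the real system additionally adapts sensing and model configurations per modality under a latency budget.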