🤖 AI Summary
To address low resource efficiency, strong inter-model dependencies, and poor deployment flexibility in multimodal multi-task inference on edge devices, this paper proposes S2M3, a novel architecture that, for the first time, decomposes multimodal models into functional-level modules and shares common modules across tasks. It introduces a greedy module-placement algorithm and a request-level parallel routing strategy that prioritizes compute-intensive modules, jointly optimizing computational and memory allocation to support efficient, low-latency, privacy-preserving on-device inference. Across 95 deployment instances, the approach achieves optimal placement in 89 (93.7%); it reduces peak memory usage by up to 62% in multi-task settings and, compared to cloud-based inference, cuts end-to-end latency by up to 56.9%, with no accuracy degradation. The core contributions are: (i) a module-level decoupling and sharing mechanism for multimodal models, and (ii) a placement-and-routing scheduling paradigm explicitly designed for edge resource constraints.
📝 Abstract
With the advancement of Artificial Intelligence (AI) towards multiple modalities (language, vision, speech, etc.), multi-modal models have increasingly been used across various applications (e.g., visual question answering or image generation/captioning). Despite the success of AI as a service for multi-modal applications, it relies heavily on the cloud, which is constrained by bandwidth, latency, privacy concerns, and unavailability under network or server failures. While on-device AI is gaining popularity, supporting multiple tasks on edge devices imposes significant resource challenges. To address this, we introduce S2M3, a split-and-share multi-modal architecture for multi-task inference on edge devices. Inspired by the general-purpose nature of multi-modal models, which are composed of multiple modules (encoder, decoder, classifier, etc.), we propose to split multi-modal models into functional-level modules and then share common modules across tasks, thereby reducing resource usage. To address the cross-model dependency arising from module sharing, we propose a greedy module-level placement with per-request parallel routing that prioritizes compute-intensive modules. Through experiments on a testbed consisting of 14 multi-modal models across 5 tasks and 10 benchmarks, we demonstrate that S2M3 can reduce memory usage by up to 50% and 62% in single-task and multi-task settings, respectively, without sacrificing accuracy. Furthermore, S2M3 achieves optimal placement in 89 out of 95 instances (93.7%) while reducing inference latency by up to 56.9% on resource-constrained devices, compared to cloud AI.
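To make the placement idea concrete, the following is a minimal sketch of a greedy, compute-priority module-placement heuristic in the spirit described above: shared modules are placed one at a time, most compute-intensive first, onto the device with the most remaining memory that can hold them. All class names, device names, and cost numbers here are hypothetical illustrations; the paper's actual algorithm, cost model, and routing logic are not reproduced.

```python
# Hypothetical sketch of greedy, compute-first module placement.
# Not the paper's implementation; names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    flops: float   # compute cost (arbitrary units)
    mem: float     # memory footprint (arbitrary units)

def greedy_place(modules, device_mem):
    """Assign each (shared) module to one device, largest compute cost first."""
    remaining = dict(device_mem)   # device name -> free memory
    placement = {}
    for m in sorted(modules, key=lambda mod: mod.flops, reverse=True):
        # Among devices that can still fit the module, pick the one with the
        # most free memory, so compute-heavy modules are placed while the
        # memory headroom is largest.
        candidates = [d for d, free in remaining.items() if free >= m.mem]
        if not candidates:
            raise RuntimeError(f"no device fits module {m.name}")
        best = max(candidates, key=lambda d: remaining[d])
        placement[m.name] = best
        remaining[best] -= m.mem
    return placement

# Because modules are shared, an encoder reused by several tasks appears
# only once in this list and costs memory only on the device it lands on.
mods = [Module("vision_encoder", 8.0, 4.0),
        Module("text_decoder", 5.0, 3.0),
        Module("classifier", 1.0, 0.5)]
print(greedy_place(mods, {"edge0": 6.0, "edge1": 5.0}))
```

Note the design choice the sketch illustrates: sorting by compute cost before placing means the modules that dominate latency are assigned first, which is what lets per-request routing run them in parallel across devices rather than serializing them behind small modules.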