🤖 AI Summary
Single large language models (LLMs) struggle to simultaneously support diverse interactive multimodal applications (IMAs) and satisfy stringent resource constraints in mobile environments. Method: The paper proposes a paradigm in which one compositional LLM serves all IMAs over wireless networks, replacing the per-task LLMs used by prior mixture-of-experts approaches. ContextLoRA (i) constructs a task dependency graph to model cross-task relationships, (ii) partitions the learnable parameter matrix of each neural layer into per-task blocks, and (iii) fine-tunes these blocks progressively through training, freezing, and masking phases so the model captures latent dependencies between tasks. ContextGear, a scheduling strategy built on a strategic grouping mechanism, then optimizes ContextLoRA's training procedure to minimize computational and communication costs. Results: The approach outperforms multi-LLM baselines on three benchmarks and is validated on a real-world wireless testbed, supporting multiple IMAs with low computational and communication overhead.
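The summary does not include implementation details, so the following is only a minimal PyTorch sketch of how a partition-and-stage scheme like the one described could look: one shared pair of LoRA matrices is split along the rank dimension into per-task blocks, and each stage trains the current task's block, freezes already-trained blocks, and masks untrained ones. The class and function names (`PartitionedLoRA`, `run_stage`), shapes, and masking mechanics are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class PartitionedLoRA(nn.Module):
    """LoRA adapter whose rank dimension is split into per-task partitions."""

    def __init__(self, d_in: int, d_out: int, rank_per_task: int, num_tasks: int):
        super().__init__()
        r = rank_per_task * num_tasks
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection
        self.rank_per_task = rank_per_task
        # 1 = task's partition is visible in the forward pass, 0 = masked out
        self.register_buffer("active", torch.zeros(num_tasks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Expand the per-task mask to the rank dimension, so partitions of
        # not-yet-trained tasks contribute nothing to the low-rank update.
        m = self.active.repeat_interleave(self.rank_per_task)
        return ((x @ self.A.t()) * m) @ self.B.t()


def run_stage(lora: PartitionedLoRA, task_id: int):
    """One train/freeze/mask stage (illustrative): the current task's block is
    trainable, earlier blocks stay visible but frozen, later blocks are masked.
    Assumes tasks are indexed in dependency order."""
    lora.active[: task_id + 1] = 1.0   # unmask current + ancestor partitions
    lora.active[task_id + 1:] = 0.0    # mask untrained partitions

    r = lora.rank_per_task
    grad_mask = torch.zeros(lora.A.shape[0])
    grad_mask[task_id * r:(task_id + 1) * r] = 1.0  # zero grads elsewhere
    handles = [
        lora.A.register_hook(lambda g: g * grad_mask.unsqueeze(1)),
        lora.B.register_hook(lambda g: g * grad_mask.unsqueeze(0)),
    ]
    return handles  # call h.remove() on each handle before the next stage
```

Freezing via gradient hooks (rather than `requires_grad=False`) is used here only because the partitions live inside single shared matrices; the paper may realize the same effect differently.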
📝 Abstract
Interactive multimodal applications (IMAs), such as route planning in the Internet of Vehicles, enrich users' personalized experiences by integrating various forms of data over wireless networks. Recent advances in large language models (LLMs) utilize mixture-of-experts (MoE) mechanisms to empower multiple IMAs, with each LLM trained individually for a specific task whose business workflow differs from the others. In contrast to existing approaches that rely on multiple LLMs for IMAs, this paper presents a novel paradigm that serves various IMAs using a single compositional LLM over wireless networks. This raises two primary challenges: 1) guiding a single LLM to adapt to diverse IMA objectives, and 2) ensuring the flexibility and efficiency of the LLM in resource-constrained mobile environments. To tackle the first challenge, we propose ContextLoRA, a novel method that guides an LLM to learn the rich structured context among IMAs by constructing a task dependency graph. We partition the learnable parameter matrix of each neural layer into per-IMA blocks to facilitate LLM composition. Then, we develop a step-by-step fine-tuning procedure guided by task relations, consisting of training, freezing, and masking phases. This allows the LLM to reason across tasks and capture their latent dependencies for better adaptation. For the second challenge, we introduce ContextGear, a scheduling strategy that optimizes the training procedure of ContextLoRA, minimizing computational and communication costs through a strategic grouping mechanism. Experiments on three benchmarks show the superiority of the proposed ContextLoRA and ContextGear. Furthermore, we prototype the proposed paradigm on a real-world wireless testbed, demonstrating its practical applicability to various IMAs. We will release our code to the community.
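The abstract does not specify ContextGear's grouping mechanism; one natural reading is that mutually independent tasks are grouped and fine-tuned together, reducing the number of sequential train/freeze/mask stages (a proxy for computation and communication cost). The sketch below is a guess along those lines, grouping tasks by topological levels of the dependency graph; the function name `group_tasks` and the level-based heuristic are assumptions, not the paper's algorithm.

```python
from collections import defaultdict, deque


def group_tasks(num_tasks: int, edges: list[tuple[int, int]]) -> list[list[int]]:
    """Group tasks into topological levels of the dependency DAG.

    edges: (u, v) pairs meaning task v depends on task u.
    Each returned level contains mutually independent tasks, so their
    parameter partitions could be fine-tuned in parallel.
    """
    indeg = [0] * num_tasks
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    frontier = deque(t for t in range(num_tasks) if indeg[t] == 0)
    groups = []
    while frontier:
        level = list(frontier)          # all currently schedulable tasks
        groups.append(level)
        frontier = deque()
        for u in level:
            for v in succ[u]:           # releasing u may unblock successors
                indeg[v] -= 1
                if indeg[v] == 0:
                    frontier.append(v)
    return groups


# Example: task 2 depends on tasks 0 and 1; task 3 depends on task 2.
print(group_tasks(4, [(0, 2), (1, 2), (2, 3)]))  # [[0, 1], [2], [3]]
```

In this example, four tasks are trained in three stages instead of four, since tasks 0 and 1 share a stage; the paper's actual grouping objective may additionally weight per-task compute and link costs.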