🤖 AI Summary
To address the challenges of strong multimodal instruction data heterogeneity and difficult cross-client collaboration in federated learning, this paper proposes the first federated instruction-tuning framework tailored for multimodal large language models (MLLMs). Methodologically: (1) we design a two-stage "adapter-on-adapter" vision-language connector; (2) we introduce a cross-task Mixture-of-Adapters (CT-MoA) module to enable task-adaptive routing; and (3) we propose an adaptive text-parameter aggregation strategy based on Euclidean distance, balancing privacy preservation and knowledge transfer. Evaluated on two cross-task federated learning scenarios, our framework significantly mitigates task heterogeneity interference while ensuring data remains local. It consistently improves multimodal instruction understanding and generation performance over strong baselines, demonstrating both practical feasibility and effectiveness in privacy-preserving distributed multimodal learning.
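The summary above mentions an adaptive aggregation strategy that weights client parameters by Euclidean distance. The paper text here does not give the exact formula, so the following is only a plausible sketch under assumed details: each client's weight decays with its average Euclidean distance to the other clients' parameters (so outlying, negatively-transferring updates get down-weighted), and the weights are normalized before averaging. The function name `aggregate_text_params` and the inverse-distance weighting are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def aggregate_text_params(client_params):
    """Hypothetical adaptive aggregation sketch: clients whose text
    parameters lie closer to the other clients' parameters receive
    larger weights (the paper only states that weights are derived
    from Euclidean distances between parameters)."""
    P = np.stack(client_params)  # shape: (n_clients, dim)
    # Pairwise Euclidean distances between client parameter vectors.
    dists = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    # Average distance of each client to all other clients.
    avg_dist = dists.sum(axis=1) / (len(P) - 1)
    # Closer clients get exponentially larger weights; normalize to 1.
    weights = np.exp(-avg_dist)
    weights /= weights.sum()
    return weights @ P, weights

# Example: two nearby clients and one outlier.
a = np.array([0.0, 0.0])
b = np.array([0.1, 0.0])
c = np.array([5.0, 5.0])
agg, w = aggregate_text_params([a, b, c])
```

With this weighting, the outlier client `c` contributes far less to the aggregate than `a` and `b`, which is the intended "reduce negative effects" behavior described in the summary.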
📄 Abstract
In this paper, we explore a novel federated multimodal instruction tuning task (FedMIT), which is significant for collaboratively fine-tuning MLLMs on different types of multimodal instruction data on distributed devices. To solve this new task, we propose a federated multimodal instruction tuning framework (Pilot). Our framework integrates two stages of "adapter on adapter" into the connector between the vision encoder and the LLM. In stage 1, we extract task-specific features and client-specific features from visual information. In stage 2, we build the cross-task Mixture-of-Adapters (CT-MoA) module to perform cross-task interaction. Each client can not only capture personalized information from local data and learn task-related multimodal information, but also learn general knowledge from other tasks. In addition, we introduce an adaptive aggregation strategy for the text training parameters, which computes aggregation weights from the Euclidean distances between parameters, so that aggregation benefits from positive transfer as much as possible while effectively suppressing negative transfer. Our framework can collaboratively exploit distributed data from different local clients to learn cross-task knowledge without being affected by task heterogeneity during instruction tuning. The effectiveness of our method is verified in two different cross-task scenarios.
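The CT-MoA module described above routes each input across several task adapters. Its exact architecture is not specified in this abstract, so the sketch below is a generic mixture-of-adapters under assumed details: a linear router produces per-token softmax gates over a set of low-rank (LoRA-style) adapters, and the gated adapter outputs are added residually to the input. The class name `CrossTaskMoA`, the zero-initialized up-projections, and the per-token gating are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class CrossTaskMoA:
    """Hypothetical cross-task Mixture-of-Adapters sketch: a router
    scores each task adapter per token, and the low-rank adapter
    outputs are mixed by those gates and added to the input."""
    def __init__(self, dim, rank, n_adapters, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((dim, n_adapters)) * 0.02
        self.down = rng.standard_normal((n_adapters, dim, rank)) * 0.02
        # Zero-init up-projections (LoRA convention): module starts
        # as an identity mapping and learns to deviate.
        self.up = np.zeros((n_adapters, rank, dim))

    def __call__(self, x):
        # x: (tokens, dim) -> gates: (tokens, n_adapters)
        gates = softmax(x @ self.router)
        # Per-adapter low-rank outputs: (tokens, n_adapters, dim).
        outs = np.einsum('td,adr,are->tae', x, self.down, self.up)
        # Gate-weighted mix, added residually.
        return x + np.einsum('ta,tae->te', gates, outs)

moa = CrossTaskMoA(dim=8, rank=2, n_adapters=3)
x = np.random.default_rng(1).standard_normal((4, 8))
y = moa(x)
```

Because the up-projections are zero-initialized, the module initially passes inputs through unchanged; training would then let each adapter specialize per task while the router learns cross-task routing.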