FedVLMBench: Benchmarking Federated Fine-Tuning of Vision-Language Models

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated fine-tuning of vision-language models (VLMs) remains challenging in privacy-sensitive domains such as healthcare, where data decentralization and heterogeneity impede effective collaborative learning. Method: This work introduces the first systematic benchmark for federated VLM fine-tuning, encompassing two mainstream VLM architectures, four parameter-efficient fine-tuning strategies, five federated learning (FL) algorithms, and six cross-domain multimodal datasets—supporting both single- and multi-task privacy-preserving evaluation. A unified evaluation framework is proposed to systematically assess performance under varying data and system heterogeneity. Contribution/Results: Empirical analysis identifies a two-layer MLP connector tuned jointly with the LLM as the optimal configuration; it further reveals, for the first time, that federated vision-centric tasks exhibit significantly higher sensitivity to data heterogeneity than text-centric ones. The open-sourced toolkit and empirical guidelines delineate the performance boundaries of FL algorithms on VLMs, establishing critical interdependencies among architecture choice, adaptation strategy, and data heterogeneity—thereby advancing privacy-preserving multimodal modeling.
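The benchmarked FL algorithms include weighted-averaging schemes in the FedAvg family. As a rough illustration of the aggregation step such algorithms perform on fine-tuned parameters (a minimal sketch, not the paper's implementation; the function name and parameter-dict representation are hypothetical):

```python
def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg-style server step: average client parameter dicts,
    weighted by each client's local dataset size.

    client_weights -- list of {param_name: value} dicts, one per client
    client_sizes   -- list of local dataset sizes, same order
    """
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in keys
    }


# Toy example: two clients with unequal data shares.
global_update = fedavg_aggregate(
    [{"connector.w": 0.0}, {"connector.w": 4.0}],
    [3, 1],
)
```

In parameter-efficient federated fine-tuning, only the tuned subset (e.g. connector and adapter weights) would typically be exchanged and averaged, keeping communication cost low.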

📝 Abstract
Vision-Language Models (VLMs) have demonstrated remarkable capabilities in cross-modal understanding and generation by integrating visual and textual information. While instruction tuning and parameter-efficient fine-tuning methods have substantially improved the generalization of VLMs, most existing approaches rely on centralized training, posing challenges for deployment in domains with strict privacy requirements like healthcare. Recent efforts have introduced Federated Learning (FL) into VLM fine-tuning to address these privacy concerns, yet comprehensive benchmarks for evaluating federated fine-tuning strategies, model architectures, and task generalization remain lacking. In this work, we present FedVLMBench, the first systematic benchmark for federated fine-tuning of VLMs. FedVLMBench integrates two mainstream VLM architectures (encoder-based and encoder-free), four fine-tuning strategies, five FL algorithms, six multimodal datasets spanning four cross-domain single-task scenarios and two cross-domain multitask settings, covering four distinct downstream task categories. Through extensive experiments, we uncover key insights into the interplay between VLM architectures, fine-tuning strategies, data heterogeneity, and multi-task federated optimization. Notably, we find that a 2-layer multilayer perceptron (MLP) connector with concurrent connector and LLM tuning emerges as the optimal configuration for encoder-based VLMs in FL. Furthermore, current FL methods exhibit significantly higher sensitivity to data heterogeneity in vision-centric tasks than text-centric ones, across both encoder-free and encoder-based VLM architectures. Our benchmark provides essential tools, datasets, and empirical guidance for the research community, offering a standardized platform to advance privacy-preserving, federated training of multimodal foundation models.
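The abstract's headline finding concerns the 2-layer MLP connector that maps vision-encoder features into the LLM's embedding space. The sketch below shows the general shape of such a connector (dimensions, initialization, and activation are illustrative assumptions, not details from the paper):

```python
import numpy as np


class MLPConnector:
    """Hypothetical 2-layer MLP connector for an encoder-based VLM:
    projects vision-encoder features into the LLM embedding space.
    In the benchmark's optimal FL configuration, this connector is
    tuned concurrently with the LLM while the vision encoder stays frozen.
    """

    def __init__(self, vision_dim: int, llm_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Small random init; real connectors typically use GELU and
        # framework-specific initialization schemes.
        self.w1 = rng.standard_normal((vision_dim, llm_dim)) * 0.02
        self.b1 = np.zeros(llm_dim)
        self.w2 = rng.standard_normal((llm_dim, llm_dim)) * 0.02
        self.b2 = np.zeros(llm_dim)

    def __call__(self, vision_feats: np.ndarray) -> np.ndarray:
        # vision_feats: (num_tokens, vision_dim) patch features
        h = np.maximum(vision_feats @ self.w1 + self.b1, 0.0)  # ReLU here for simplicity
        return h @ self.w2 + self.b2  # (num_tokens, llm_dim) soft prompts for the LLM


# Example: project 5 vision tokens (dim 768) into a 4096-dim LLM space.
connector = MLPConnector(vision_dim=768, llm_dim=4096)
tokens = connector(np.ones((5, 768)))
```

In a federated setting, the connector's small parameter count makes it a natural candidate for the tuned-and-communicated subset alongside the LLM's adapted weights.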
Problem

Research questions and friction points this paper is trying to address.

Evaluating federated fine-tuning strategies for Vision-Language Models
Assessing model architectures and task generalization in FL settings
Addressing data heterogeneity in privacy-preserving multimodal training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning for Vision-Language Models
Benchmark with diverse VLM architectures
Optimal 2-layer MLP connector tuning