F³OCUS - Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics

📅 2024-11-17
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of federated fine-tuning of large vision-language models (VLMs) on resource-constrained clients, this paper proposes F³OCUS: a novel framework that introduces hierarchical Neural Tangent Kernel (NTK) principal eigenvalue magnitude to quantify layer importance and explicitly models inter-client layer-wise diversity, formulating a data-agnostic, multi-objective co-optimization problem. It jointly optimizes importance and diversity using five meta-heuristic algorithms and integrates parameter-efficient fine-tuning (PEFT) for lightweight federated adaptation. Contributions include: (1) a new layer importance metric grounded in NTK spectral analysis; (2) MedVQA-FL, the first federated benchmark for medical visual question answering; and (3) comprehensive evaluation across six task categories, 58 medical imaging datasets, and four VLM architectures, with over 10,000 client-level experiments demonstrating significant improvements in accuracy and generalization while reducing communication and computational overhead.

πŸ“ Abstract
Effective training of large Vision-Language Models (VLMs) on resource-constrained client devices in Federated Learning (FL) requires the use of parameter-efficient fine-tuning (PEFT) strategies. To this end, we demonstrate the impact of two factors, viz., a client-specific layer importance score that selects the most important VLM layers for fine-tuning, and an inter-client layer diversity score that encourages diverse layer selection across clients. We first theoretically motivate and leverage the principal eigenvalue magnitude of layerwise Neural Tangent Kernels and show its effectiveness as a client-specific layer importance score. Next, we propose a novel layer updating strategy dubbed F³OCUS that jointly optimizes the layer importance and diversity factors by employing a data-free, multi-objective, meta-heuristic optimization on the server. We explore 5 different meta-heuristic algorithms and compare their effectiveness for selecting model layers and adapter layers towards PEFT-FL. Furthermore, we release a new MedVQA-FL dataset involving overall 707,962 VQA triplets and 9 modality-specific clients and utilize it to train and evaluate our method. Overall, we conduct more than 10,000 client-level experiments on 6 Vision-Language FL task settings involving 58 medical image datasets and 4 different VLM architectures of varying sizes to demonstrate the effectiveness of the proposed method.
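The layerwise NTK importance score described above can be sketched numerically: the layerwise NTK is K_l = J_l J_lᵀ, where J_l is the Jacobian of the model outputs with respect to layer l's parameters, and the importance score is its principal eigenvalue. The toy two-layer MLP, finite-difference Jacobian, and all names and sizes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy 2-layer MLP; layer importance = principal eigenvalue of the
# layerwise empirical NTK  K_l = J_l @ J_l.T  (illustrative sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                 # 8 samples, 4 features
params = [rng.normal(size=(4, 5)),          # layer 0 weights
          rng.normal(size=(5, 1))]          # layer 1 weights

def forward(params, X):
    h = np.tanh(X @ params[0])
    return (h @ params[1]).ravel()          # outputs, shape (8,)

def layer_jacobian(params, X, l, eps=1e-5):
    """Finite-difference Jacobian of outputs w.r.t. layer l's weights."""
    base = forward(params, X)
    J = np.zeros((base.size, params[l].size))
    for i in range(params[l].size):
        p = [w.copy() for w in params]
        p[l].ravel()[i] += eps              # perturb one weight
        J[:, i] = (forward(p, X) - base) / eps
    return J

def ntk_importance(params, X, l):
    J = layer_jacobian(params, X, l)
    K = J @ J.T                             # layerwise empirical NTK
    return np.linalg.eigvalsh(K)[-1]        # principal eigenvalue

scores = [ntk_importance(params, X, l) for l in range(len(params))]
```

A client would rank its layers by these scores and prefer the highest-scoring ones for fine-tuning; the real method computes this per client on the actual VLM layers.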
Problem

Research questions and friction points this paper is trying to address.

Optimizing federated fine-tuning of vision-language models on resource-constrained devices
Balancing client-specific layer importance and inter-client diversity for optimal VLM updates
Enhancing parameter-efficient FL via meta-heuristic layer selection strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Client-specific layer importance score via Neural Tangent Kernels
Multi-objective meta-heuristic optimization for layer selection
Parameter-efficient fine-tuning in federated learning
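The joint importance/diversity selection above can be illustrated with a toy server-side search: each client picks k of L layers, and the server scores an assignment by total selected importance minus a penalty for layers shared across clients. A simple random-search heuristic stands in for the paper's five meta-heuristic algorithms; all weights, sizes, and the scalarized objective are illustrative assumptions:

```python
import numpy as np

# Toy layer selection for C clients over L layers, k layers each.
# Objective: sum of per-client importance scores of selected layers,
# minus alpha * (number of duplicate client-layer picks) as a crude
# inter-client diversity term (scalarization is an assumption here).
rng = np.random.default_rng(1)
C, L, k = 3, 6, 2
importance = rng.random((C, L))             # e.g. NTK-based scores

def objective(masks, alpha=0.5):
    imp = (masks * importance).sum()            # selected importance
    counts = masks.sum(axis=0)                  # clients per layer
    overlap = np.maximum(counts - 1, 0).sum()   # shared-layer penalty
    return imp - alpha * overlap

def random_masks():
    m = np.zeros((C, L), dtype=int)
    for c in range(C):
        m[c, rng.choice(L, size=k, replace=False)] = 1
    return m

best, best_val = None, -np.inf
for _ in range(500):                        # random-search stand-in
    m = random_masks()
    v = objective(m)
    if v > best_val:
        best, best_val = m, v
```

The paper's data-free formulation treats importance and diversity as two objectives rather than a fixed weighted sum; this sketch only conveys the shape of the search space the server-side meta-heuristics explore.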