VLLFL: A Vision-Language Model Based Lightweight Federated Learning Framework for Smart Agriculture

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of data privacy preservation and high communication overhead in cross-farm agricultural object detection, this paper integrates vision-language models (VLMs) into agricultural federated learning through a lightweight, prompt-based collaborative training framework. The framework deploys a compact prompt generator at the edge to adapt the VLM locally, enabling on-device knowledge extraction and cross-domain transfer while ensuring raw data never leaves the farm. It jointly addresses model compactness and contextual awareness, supporting diverse tasks including fruit and vegetable recognition and pest detection. Experiments demonstrate a 14.53% improvement in VLM detection accuracy and a 99.3% reduction in communication cost, yielding gains in detection precision, bandwidth efficiency, and privacy for distributed farmland intelligence.

📝 Abstract
In modern smart agriculture, object detection plays a crucial role by enabling automation, precision farming, and monitoring of resources. From identifying crop health and pest infestations to optimizing harvesting processes, accurate object detection enhances both productivity and sustainability. However, training object detection models often requires large-scale data collection and raises privacy concerns, particularly when sensitive agricultural data is distributed across farms. To address these challenges, we propose VLLFL, a vision-language-model-based lightweight federated learning framework. It harnesses the generalization and context-aware detection capabilities of the vision-language model (VLM) and leverages the privacy-preserving nature of federated learning. By training a compact prompt generator to boost the performance of the VLM deployed across different farms, VLLFL preserves privacy while reducing communication overhead. Experimental results demonstrate that VLLFL achieves a 14.53% improvement in VLM performance while reducing communication overhead by 99.3%. Spanning tasks from identifying a wide variety of fruits to detecting harmful animals in agriculture, the proposed framework offers an efficient, scalable, and privacy-preserving solution specifically tailored to agricultural applications.
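The core mechanism described above can be sketched in a few lines: each farm trains only a small prompt-generator module on its private data, and a server runs federated averaging over just those parameters while the VLM backbone stays frozen on-device. This is a minimal illustration, not the paper's code; the module size, training loop, and farm count are all assumptions.

```python
# Minimal sketch (not the paper's implementation) of federated averaging
# restricted to a compact prompt-generator module: only PROMPT_DIM floats
# are communicated per round, never raw farm data or the VLM backbone.
import numpy as np

PROMPT_DIM = 64  # assumed size of the compact prompt generator


def local_update(weights, lr=0.1, steps=5, seed=0):
    """Stand-in for one farm's local training: a few mock gradient steps."""
    rng = np.random.default_rng(seed)
    w = weights.copy()
    for _ in range(steps):
        w -= lr * rng.normal(scale=0.01, size=w.shape)  # mock gradient
    return w


def fedavg(client_weights):
    """Server side: average the prompt-generator weights across farms."""
    return np.mean(np.stack(client_weights), axis=0)


global_w = np.zeros(PROMPT_DIM)
for _round in range(3):
    # Each farm adapts the prompt generator on its private data;
    # only the small weight vector is uploaded for aggregation.
    updates = [local_update(global_w, seed=farm) for farm in range(4)]
    global_w = fedavg(updates)

print(global_w.shape)
```

Because the VLM itself never moves, the per-round payload is the prompt generator alone, which is what drives the bandwidth savings the paper reports.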
Problem

Research questions and friction points this paper is trying to address.

Privacy concerns in distributed agricultural data collection
High communication overhead in federated learning systems
Limited object detection accuracy in smart agriculture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language model for context-aware detection
Lightweight federated learning for privacy preservation
Compact prompt generator to reduce communication overhead
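The communication saving from the last point is simple arithmetic: if clients exchange a compact prompt generator instead of full detector weights, the reduction is one minus the size ratio. The parameter counts below are hypothetical, chosen only to illustrate how a ratio of this kind yields a figure near the reported 99.3%.

```python
# Illustrative arithmetic with assumed (hypothetical) parameter counts:
# sharing a small prompt generator instead of a full VLM detector.
vlm_params = 150e6         # hypothetical full VLM detector size
prompt_gen_params = 1e6    # hypothetical compact prompt generator size

reduction = 1 - prompt_gen_params / vlm_params
print(f"communication reduction: {reduction:.1%}")  # ~99.3% at this ratio
```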
Long Li
Research Staff Member, Inspur Group Co., Ltd.
Software Defined Networking · Network Performance Optimization
Jiajia Li
Electrical and Computer Engineering, Michigan State University
Dong Chen
Agricultural and Biological Engineering, Mississippi State University
Lina Pu
Department of Computer Science, University of Alabama
IoT · IoT Security · Edge Computing
Haibo Yao
USDA-ARS Genetics and Sustainable Agriculture
Yanbo Huang
USDA ARS
Precision Agriculture · Remote Sensing · Application Technology