Rethinking Plant Disease Diagnosis: Bridging the Academic-Practical Gap with Vision Transformers and Zero-Shot Learning

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing plant disease classification studies predominantly rely on the idealized PlantVillage dataset, resulting in poor generalization to real-world field images and hindering practical deployment. To address this domain gap, we propose a cross-domain diagnostic framework integrating Vision Transformers (ViT) and CLIP’s zero-shot learning capability. ViT enhances robustness against complex backgrounds, occlusions, and illumination variations inherent in field conditions, while CLIP leverages multimodal alignment between disease descriptions and visual features to enable zero-shot classification—requiring no fine-tuning and only natural-language disease labels. This design improves model interpretability and deployment flexibility. Experiments demonstrate that ViT significantly outperforms CNNs in domain adaptation tasks, achieving higher accuracy under distribution shift. Moreover, CLIP maintains strong discriminative performance on unseen disease classes and real-field images, validating its practical utility for frontline agricultural applications.

📝 Abstract
Recent advances in deep learning have enabled significant progress in plant disease classification using leaf images. Much of the existing research in this field has relied on the PlantVillage dataset, which consists of well-centered plant images captured against uniform, uncluttered backgrounds. Although models trained on this dataset achieve high accuracy, they often fail to generalize to real-world field images, such as those submitted by farmers to plant diagnostic systems. This has created a significant gap between published studies and practical application requirements, highlighting the necessity of investigating and addressing this issue. In this study, we investigate whether attention-based architectures and zero-shot learning approaches can bridge the gap between curated academic datasets and real-world agricultural conditions in plant disease classification. We evaluate three model categories: Convolutional Neural Networks (CNNs), Vision Transformers, and Contrastive Language-Image Pre-training (CLIP)-based zero-shot models. While CNNs exhibit limited robustness under domain shift, Vision Transformers demonstrate stronger generalization by capturing global contextual features. Most notably, CLIP models classify diseases directly from natural language descriptions without any task-specific training, offering strong adaptability and interpretability. These findings highlight the potential of zero-shot learning as a practical and scalable domain adaptation strategy for plant health diagnosis in diverse field environments.
Problem

Research questions and friction points this paper is trying to address.

Bridging the academic-practical gap in plant disease classification
Addressing poor generalization from lab datasets to real-world field images
Investigating zero-shot learning for adaptable plant disease diagnosis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Transformers capture global contextual features
CLIP models use natural language descriptions
Zero-shot learning enables domain adaptation strategy
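The zero-shot mechanism listed above can be sketched numerically: CLIP encodes each natural-language disease prompt and the query image into a shared embedding space, then classifies by softmax over cosine similarities. The sketch below is illustrative only, not the paper's code; the label names, prompt template, and random placeholder embeddings are assumptions standing in for the outputs of CLIP's pretrained text and image encoders.

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: L2-normalize embeddings,
    take cosine similarities, and softmax over the candidate labels."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)      # scaled cosine similarities
    exp = np.exp(logits - logits.max())     # numerically stable softmax
    return exp / exp.sum()

# Hypothetical disease labels turned into prompts (not from the paper):
labels = ["healthy leaf", "early blight", "powdery mildew"]
prompts = [f"a photo of a tomato leaf with {name}" for name in labels]

# Placeholder embeddings stand in for CLIP's encoders.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(len(prompts), 512))
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)  # image near "early blight"

probs = zero_shot_scores(image_emb, text_embs)
print(labels[int(np.argmax(probs))])  # → early blight
```

No fine-tuning is involved: adding a new disease class only requires adding a new text prompt, which is what makes this attractive for deployment on unseen field conditions.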
Wassim Benabbas — Department of Computer Science, Mohamed El Bachir El Ibrahimi University, Bordj Bou Arreridj, Algeria
Mohammed Brahimi — Technical University of Munich, 3D Computer Vision
Samir Akhrouf — Laboratory of Informatics and its Applications of M’sila, Mohamed Boudiaf University, M’Sila, Algeria
Bilal Fortas — Department of Agricultural Science, Mohamed El Bachir El Ibrahimi University, Bordj Bou Arreridj, Algeria