🤖 AI Summary
Existing plant disease classification studies rely predominantly on the idealized PlantVillage dataset, so the resulting models generalize poorly to real-world field images, hindering practical deployment. To address this domain gap, we propose a cross-domain diagnostic framework integrating Vision Transformers (ViT) and CLIP’s zero-shot learning capability. ViT improves robustness to the complex backgrounds, occlusions, and illumination variations inherent in field conditions, while CLIP leverages multimodal alignment between disease descriptions and visual features to enable zero-shot classification, requiring only natural-language disease labels and no fine-tuning. This design improves model interpretability and deployment flexibility. Experiments demonstrate that ViT significantly outperforms CNNs under distribution shift, achieving higher accuracy in domain-adaptation tasks. Moreover, CLIP maintains strong discriminative performance on unseen disease classes and real-field images, validating its practical utility for frontline agricultural applications.
📝 Abstract
Recent advances in deep learning have enabled significant progress in plant disease classification using leaf images. Much of the existing research in this field has relied on the PlantVillage dataset, which consists of well-centered plant images captured against uniform, uncluttered backgrounds. Although models trained on this dataset achieve high accuracy, they often fail to generalize to real-world field images, such as those submitted by farmers to plant diagnostic systems. This gap between published results and practical requirements motivates the present study. We investigate whether attention-based architectures and zero-shot learning approaches can bridge the divide between curated academic datasets and real-world agricultural conditions in plant disease classification. We evaluate three model categories: Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Contrastive Language-Image Pre-training (CLIP)-based zero-shot models. While CNNs exhibit limited robustness under domain shift, Vision Transformers demonstrate stronger generalization by capturing global contextual features. Most notably, CLIP models classify diseases directly from natural-language descriptions without any task-specific training, offering strong adaptability and interpretability. These findings highlight the potential of zero-shot learning as a practical and scalable domain-adaptation strategy for plant health diagnosis in diverse field environments.
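The zero-shot classification step the abstract describes can be sketched concretely: a CLIP image encoder and text encoder map the leaf photo and the disease-name prompts into a shared embedding space, and the prediction is the prompt with the highest cosine similarity to the image. The sketch below uses small placeholder vectors in place of real CLIP encoder outputs (the prompt texts, embedding dimension, and temperature value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, temperature=100.0):
    """Return the label whose text embedding is most similar to the image embedding.

    Mirrors CLIP-style zero-shot inference: L2-normalize both sides so the
    dot product is a cosine similarity, then take a temperature-scaled softmax.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                            # one cosine similarity per label
    logits = temperature * sims
    probs = np.exp(logits - logits.max())       # shift for numerical stability
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], probs

# Toy stand-ins for CLIP encoder outputs (hypothetical prompts and vectors)
labels = [
    "a photo of a leaf with early blight",
    "a photo of a leaf with powdery mildew",
    "a photo of a healthy leaf",
]
text_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0]])
image_emb = np.array([0.9, 0.1, 0.2, 0.0])      # closest to the first prompt

pred, probs = zero_shot_classify(image_emb, text_embs, labels)
```

Because the class set is defined entirely by the prompt strings, adding a new disease requires only writing a new description, which is the adaptability property the abstract highlights.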