AgriChat: A Multimodal Large Language Model for Agriculture Image Understanding

πŸ“… 2026-03-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the limitations of agricultural multimodal large models, which suffer from a shortage of high-quality annotated data and reliable domain knowledge, leading to biological hallucinations and poor generalization. To overcome these challenges, the authors propose the Vision-to-Verified-Knowledge (V2VK) pipeline, which integrates visual descriptions with web-augmented retrieval of plant pathology literature to automatically construct AgriMM, the first verifiable, large-scale agricultural multimodal benchmark, covering over 3,000 categories and more than 600,000 visual question-answer pairs. Building on this benchmark, they develop AgriChat, a specialized multimodal large model that substantially improves accuracy and interpretability in tasks such as species identification, disease diagnosis, fruit counting, and maturity assessment, while effectively mitigating biological hallucinations.

πŸ“ Abstract
The deployment of Multimodal Large Language Models (MLLMs) in agriculture is currently stalled by a twofold gap: the existing literature lacks the large-scale agricultural datasets required for robust model development and evaluation, while current state-of-the-art models lack the verified domain expertise necessary to reason across diverse taxonomies. To address these challenges, we propose the Vision-to-Verified-Knowledge (V2VK) pipeline, a novel generative AI-driven annotation framework that integrates visual captioning with web-augmented scientific retrieval to autonomously generate the AgriMM benchmark, effectively eliminating biological hallucinations by grounding training data in verified phytopathological literature. The AgriMM benchmark contains over 3,000 agricultural classes and more than 607k VQAs spanning multiple tasks, including fine-grained plant species identification, plant disease symptom recognition, crop counting, and ripeness assessment. Leveraging this verifiable data, we present AgriChat, a specialized MLLM that demonstrates broad knowledge across thousands of agricultural classes and provides detailed agricultural assessments with extensive explanations. Extensive evaluation across diverse tasks, datasets, and evaluation conditions reveals both the capabilities and limitations of current agricultural MLLMs, while demonstrating AgriChat's superior performance over other open-source models on both internal and external benchmarks. The results validate that preserving visual detail combined with web-verified knowledge constitutes a reliable pathway toward robust and trustworthy agricultural AI. The code and dataset are publicly available at https://github.com/boudiafA/AgriChat.
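As a rough illustration of the V2VK idea described in the abstract, the sketch below shows how a single image could be turned into literature-grounded VQA pairs: a vision-language model describes the image, a retriever pulls supporting passages from plant-pathology sources on the web, and a language model writes question-answer pairs constrained to those passages. This is a minimal sketch under assumed interfaces; the function names, signatures, and the `VQAPair` structure are illustrative and are not taken from the AgriChat repository.

```python
# Illustrative sketch only: the model calls are stubbed out, and all names
# below (caption_image, retrieve_literature, write_vqa, VQAPair) are
# assumptions, not identifiers from the AgriChat codebase.
from dataclasses import dataclass
from typing import List


@dataclass
class VQAPair:
    question: str
    answer: str
    sources: List[str]  # URLs/DOIs of the passages that verify the answer


def caption_image(image_path: str) -> str:
    """Stand-in for a vision-language captioner describing the image."""
    raise NotImplementedError("plug in a captioning model")


def retrieve_literature(query: str, top_k: int = 3) -> List[dict]:
    """Stand-in for web-augmented retrieval over plant-pathology literature.

    Expected to return items like {"text": ..., "url": ...}.
    """
    raise NotImplementedError("plug in a retrieval backend")


def write_vqa(caption: str, passages: List[dict], n_pairs: int = 3) -> List[VQAPair]:
    """Stand-in for an LLM prompted to write QA pairs supported only by the
    retrieved passages, citing their URLs as sources."""
    raise NotImplementedError("plug in an LLM")


def annotate(image_path: str, class_label: str) -> List[VQAPair]:
    # 1. Vision: describe what is actually visible in the image.
    caption = caption_image(image_path)
    # 2. Verified knowledge: retrieve literature for the labeled class,
    #    e.g. "tomato late blight" plus the caption, to ground the answers.
    passages = retrieve_literature(f"{class_label} {caption}")
    # 3. Generation: answers are restricted to facts in the passages, which
    #    is the step intended to suppress biological hallucinations.
    return write_vqa(caption, passages)
```

The point of the grounding step is that answers come from retrieved, verifiable literature rather than from the language model's parametric knowledge, which is what the paper credits for reducing biological hallucinations in the generated benchmark.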
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Agricultural Image Understanding
Domain Expertise
Large-scale Dataset
Biological Hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Model
Vision-to-Verified-Knowledge
AgriMM benchmark
Agricultural AI
Web-augmented scientific retrieval
πŸ”Ž Similar Papers
No similar papers found.