AgriGPT-Omni: A Unified Speech-Vision-Text Framework for Multilingual Agricultural Intelligence

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Agriculture has long suffered from scarce multilingual speech data, modality fragmentation, and the absence of unified evaluation benchmarks, hindering the deployment of multimodal large models in intelligent agriculture. To address these challenges, we propose AgriGPT-Omni, the first holistic agricultural multimodal framework. It introduces (i) the largest six-language agricultural speech dataset to date (492K synthetic + 1.4K real samples), (ii) AgriBench-Omni-2K, the first tri-modal (speech-vision-text) agricultural benchmark with multilingual slices, and (iii) a three-stage training paradigm: textual knowledge injection → progressive multimodal alignment → GRPO-based reinforcement learning. Experiments demonstrate substantial gains over general-purpose baselines in multilingual multimodal reasoning and real-world speech understanding. All models, datasets, benchmarks, and code are publicly released to advance sustainable AI adoption in low-resource agricultural regions.

📝 Abstract
Despite rapid advances in multimodal large language models, agricultural applications remain constrained by the lack of multilingual speech data, unified multimodal architectures, and comprehensive evaluation benchmarks. To address these challenges, we present AgriGPT-Omni, an agricultural omni-framework that unifies speech, vision, and text. First, we construct a scalable data synthesis and collection pipeline that converts agricultural texts and images into training data, resulting in the largest agricultural speech dataset to date, comprising 492K synthetic and 1.4K real speech samples across six languages. Second, building on this data, we train the first agricultural omni-model via a three-stage paradigm: textual knowledge injection, progressive multimodal alignment, and GRPO-based reinforcement learning, enabling unified reasoning across languages and modalities. Third, we propose AgriBench-Omni-2K, the first tri-modal benchmark for agriculture, covering diverse speech-vision-text tasks and multilingual slices, with standardized protocols and reproducible tools. Experiments show that AgriGPT-Omni significantly outperforms general-purpose baselines on multilingual and multimodal reasoning as well as real-world speech understanding. All models, data, benchmarks, and code will be released to promote reproducible research, inclusive agricultural intelligence, and sustainable AI development for low-resource regions.
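The abstract's third training stage names GRPO (Group Relative Policy Optimization). As background only, not the paper's implementation: GRPO's defining step is to score a group of sampled responses per prompt and normalize each reward against the group mean and standard deviation, yielding critic-free advantages. A minimal sketch of that normalization:

```python
import statistics

def grpo_advantages(group_rewards):
    """Group-relative advantages: normalize each sampled response's
    reward by the mean and std of its own sampling group."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)  # population std over the group
    if std == 0:
        # All responses scored identically: no learning signal.
        return [0.0 for _ in group_rewards]
    return [(r - mean) / std for r in group_rewards]

# Hypothetical example: four responses to one prompt, scored by a reward model.
rewards = [1.0, 0.0, 0.5, 0.5]
advs = grpo_advantages(rewards)
```

Responses scored above the group mean get positive advantages and are reinforced; those below get negative ones. How AgriGPT-Omni defines its rewards is not specified in this summary.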
Problem

Research questions and friction points this paper is trying to address.

Lack of multilingual speech data for agricultural AI applications
Absence of unified multimodal architectures for farming intelligence
Absence of comprehensive evaluation benchmarks for agricultural AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified speech-vision-text framework for multilingual agriculture
Three-stage training with knowledge injection and multimodal alignment
Tri-modal benchmark with standardized protocols and tools
Bo Yang
Zhejiang University
Lanfei Feng
Zhejiang University
Yunkui Chen
Zhejiang University
Yu Zhang
Zhejiang University
Jianyu Zhang
Zhejiang University
Xiao Xu
Zhejiang University
Nueraili Aierken
Zhejiang University
Shijian Li
Zhejiang University
pervasive computing · human-computer interaction · artificial intelligence