🤖 AI Summary
Agricultural applications of multimodal large models have long been hindered by scarce multilingual speech data, fragmented modalities, and the absence of unified evaluation benchmarks. To address these challenges, we propose AgriGPT-Omni, the first holistic agricultural multimodal framework. It introduces (i) the largest agricultural speech dataset to date, spanning six languages (492K synthetic + 1.4K real samples); (ii) AgriBench-Omni-2K, the first tri-modal (speech-vision-text) agricultural benchmark with multilingual coverage; and (iii) a three-stage training paradigm: textual knowledge injection → progressive cross-modal alignment → GRPO-based reinforcement learning. Experiments demonstrate substantial gains over general-purpose baselines in multilingual multimodal reasoning and real-world speech understanding. All models, datasets, benchmarks, and code are publicly released to advance sustainable AI adoption in low-resource agricultural regions.
📝 Abstract
Despite rapid advances in multimodal large language models, agricultural applications remain constrained by the lack of multilingual speech data, unified multimodal architectures, and comprehensive evaluation benchmarks. To address these challenges, we present AgriGPT-Omni, an omni-framework that unifies speech, vision, and text for agriculture. First, we construct a scalable data synthesis and collection pipeline that converts agricultural texts and images into speech training data, yielding the largest agricultural speech dataset to date: 492K synthetic and 1.4K real speech samples across six languages. Second, building on this dataset, we train the first agricultural omni-model via a three-stage paradigm (textual knowledge injection, progressive multimodal alignment, and GRPO-based reinforcement learning), enabling unified reasoning across languages and modalities. Third, we propose AgriBench-Omni-2K, the first tri-modal benchmark for agriculture, covering diverse speech-vision-text tasks and multilingual slices with standardized protocols and reproducible evaluation tools. Experiments show that AgriGPT-Omni significantly outperforms general-purpose baselines on multilingual and multimodal reasoning as well as real-world speech understanding. All models, data, benchmarks, and code will be released to promote reproducible research, inclusive agricultural intelligence, and sustainable AI development for low-resource regions.