Global Rice Multi-Class Segmentation Dataset (RiceSEG): A Comprehensive and Diverse High-Resolution RGB-Annotated Images for the Development and Benchmarking of Rice Segmentation Algorithms

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Field-based rice phenotyping is hindered by the difficulty of fine-grained organ segmentation and the scarcity of high-quality, pixel-level annotations. To address this, the authors introduce RiceSEG, the first large-scale, cross-national (five countries), multi-cultivar (6,000+ genotypes), full-growth-cycle semantic segmentation benchmark for rice. It comprises 3,078 high-resolution RGB images with pixel-accurate annotations for six classes covering plant organs and common confounding objects (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed), filling a critical gap in crop-specific segmentation datasets. Using it, the authors systematically benchmark state-of-the-art models, including DeepLabv3+ and SegFormer, and identify clear performance bottlenecks: notably low IoU on panicles, senescent vegetation, and weeds (average improvement margin ≈22%), especially under complex reproductive-stage canopies. RiceSEG thus establishes a rigorous, community-standard evaluation framework for organ-level rice phenotyping and segmentation algorithm development.

📝 Abstract
Developing computer vision-based rice phenotyping techniques is crucial for precision field management and accelerating breeding, thereby continuously advancing rice production. Among phenotyping tasks, distinguishing image components is a key prerequisite for characterizing plant growth and development at the organ scale, enabling deeper insights into eco-physiological processes. However, due to the fine structure of rice organs and complex illumination within the canopy, this task remains highly challenging, underscoring the need for a high-quality training dataset. Such datasets are scarce, both due to a lack of large, representative collections of rice field images and the time-intensive nature of annotation. To address this gap, we established the first comprehensive multi-class rice semantic segmentation dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based images from five major rice-growing countries (China, Japan, India, the Philippines, and Tanzania), encompassing over 6,000 genotypes across all growth stages. From these original images, 3,078 representative samples were selected and annotated with six classes (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably, the sub-dataset from China spans all major genotypes and rice-growing environments from the northeast to the south. Both state-of-the-art convolutional neural networks and transformer-based semantic segmentation models were used as baselines. While these models perform reasonably well in segmenting background and green vegetation, they face difficulties during the reproductive stage, when canopy structures are more complex and multiple classes are involved. These findings highlight the importance of our dataset for developing specialized segmentation models for rice and other crops.
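Mean intersection-over-union (mIoU) is the standard metric for benchmarking semantic segmentation models like those evaluated here. The following is a minimal sketch of per-class IoU and mIoU computation for a six-class label scheme such as RiceSEG's; the class names and toy label maps are illustrative, not taken from the paper's evaluation code.

```python
import numpy as np

# Illustrative class list mirroring RiceSEG's six annotation classes.
CLASSES = ["background", "green_vegetation", "senescent_vegetation",
           "panicle", "weeds", "duckweed"]

def per_class_iou(pred, target, num_classes):
    """IoU for each class from integer label maps; NaN if class absent."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:               # class occurs in prediction or ground truth
            ious[c] = inter / union
    return ious

def mean_iou(pred, target, num_classes):
    # Average only over classes that actually occur (NaN-aware mean).
    return np.nanmean(per_class_iou(pred, target, num_classes))

# Toy 4x4 label maps: one panicle (class 3) pixel mislabeled as vegetation.
target = np.array([[0, 0, 3, 3],
                   [0, 0, 3, 3],
                   [1, 1, 0, 0],
                   [1, 1, 0, 0]])
pred = target.copy()
pred[0, 2] = 1
print(per_class_iou(pred, target, 6)[3])    # panicle IoU: 0.75
print(round(mean_iou(pred, target, 6), 3))  # mIoU over present classes: 0.85
```

In practice, IoU is accumulated over a whole test set via a confusion matrix rather than per image, but the per-class definition is the same.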
Problem

Research questions and friction points this paper is trying to address.

Develop a high-quality rice segmentation dataset for phenotyping.
Address the difficulty of rice organ segmentation under complex canopy conditions.
Benchmark segmentation models across diverse rice growth stages.
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-resolution RGB images for rice segmentation
Multi-class semantic segmentation dataset RiceSEG
CNN- and transformer-based baseline models benchmarked
Authors

Junchi Zhou
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Haozhou Wang
Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
Yoichiro Kato
Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
Tejasri Nampally
Department of Artificial Intelligence, Indian Institute of Technology, Hyderabad, India
P. Rajalakshmi
Department of Electrical Engineering, Indian Institute of Technology, Hyderabad, India
M. Balram
Institute of Biotechnology, Professor Jayashankar Telangana State Agricultural University, Hyderabad, India
Keisuke Katsura
Graduate School of Agriculture, Kyoto University, Kyoto, Japan
Hao Lu
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
Yue Mu
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Wanneng Yang
National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, and Hubei Key Laboratory of Agricultural Bioinformatics, Huazhong Agricultural University, Wuhan, China
Yangmingrui Gao
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Feng Xiao
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Hongtao Chen
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Yuhao Chen
Engineering Research Center of Plant Phenotyping, Ministry of Education, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Sanya Institute of Nanjing Agricultural University, Nanjing Agricultural University, Nanjing, China
Wenjuan Li
State Key Laboratory of Efficient Utilization of Arid and Semi-arid Arable Land in Northern China, Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, China
Jingwen Wang
Center for Geospatial Information, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Fenghua Yu
School of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
Jian Zhou
Rice Research Institute, Jilin Academy of Agricultural Sciences, Changchun, China
Wensheng Wang
Institute of Crop Sciences/National Key Facility for Crop Gene Resources and Genetic Improvement, Chinese Academy of Agricultural Sciences, Beijing, China
Xiaochun Hu
Yuan Long Ping High-Tech Agriculture Co., Ltd., Changsha, China
Yuanzhu Yang
Yuan Long Ping High-Tech Agriculture Co., Ltd., Changsha, China
Yanfeng Ding
Nankai University
Wei Guo
Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
Shouyang Liu
Professor, Nanjing Agricultural University