CottonSim: Development of an autonomous visual-guided robotic cotton-picking system in the Gazebo

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address autonomous cotton harvesting in agricultural fields, this paper introduces the first end-to-end Gazebo-based simulation benchmark tailored for agricultural harvesting. Built upon the Husky mobile robot platform and our custom Cotton-Eye vision perception system, the framework integrates RGB-D sensing, YOLOv8n-seg instance segmentation (mAP: 85.2%), SLAM-based mapping, and GPS/IMU sensor fusion for robust localization. It enables dual-mode navigation—map-based and GPS-guided—within a custom ROS-based virtual cotton field environment. The proposed method achieves a closed-loop pipeline for vision-guided autonomous navigation, real-time cotton detection, localization, and harvesting. Experimental results show a 96.7% success rate for map-based navigation (position error < 0.25 m) and 100% success for GPS-based navigation (angular error < 5×10⁻⁶°). All code, trained models, and the virtual environment are publicly released, establishing a reproducible, extensible simulation baseline for agricultural robotics algorithm development and evaluation.
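The dual-mode success criteria reported above (position error < 0.25 m for map-based runs, angular error < 5×10⁻⁶° for GPS-based runs) can be sketched as simple threshold checks. The following Python sketch is illustrative only — the function names and trial counts are hypothetical, not taken from the released CottonSim code:

```python
import math

# Thresholds reported in the paper's experiments.
MAP_POS_THRESHOLD_M = 0.25      # map-based: position error < 0.25 m
GPS_ANG_THRESHOLD_DEG = 5e-6    # GPS-based: angular error < 5e-6 degrees

def map_goal_reached(pose_xy, goal_xy, threshold=MAP_POS_THRESHOLD_M):
    """Map-based check: Euclidean distance to the goal, in metres."""
    dx = pose_xy[0] - goal_xy[0]
    dy = pose_xy[1] - goal_xy[1]
    return math.hypot(dx, dy) < threshold

def gps_goal_reached(fix_latlon, goal_latlon, threshold=GPS_ANG_THRESHOLD_DEG):
    """GPS-based check: worst-case latitude/longitude error, in degrees."""
    lat_err = abs(fix_latlon[0] - goal_latlon[0])
    lon_err = abs(fix_latlon[1] - goal_latlon[1])
    return max(lat_err, lon_err) < threshold

def completion_rate(successes, trials):
    """Completion rate (CR) as a percentage of successful runs."""
    return 100.0 * successes / trials
```

For example, 29 successes over a hypothetical 30 trials gives `completion_rate(29, 30)` ≈ 96.7, matching the magnitude of the reported map-based CR (the actual trial count is not stated here).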

📝 Abstract
In this study, an autonomous visual-guided robotic cotton-picking system, built on Clearpath's Husky robot platform and the Cotton-Eye perception system, was developed in the Gazebo robotic simulator. Furthermore, a virtual cotton farm was designed and developed as a Robot Operating System (ROS 1) package to deploy the robotic cotton picker in the Gazebo environment for simulating autonomous field navigation. The navigation was assisted by the map coordinates and an RGB-depth camera, while the ROS navigation algorithm utilized a trained YOLOv8n-seg model for instance segmentation. The model achieved a mean Average Precision (mAP) of 85.2%, a recall of 88.9%, and a precision of 93.0% for scene segmentation. The developed ROS navigation packages enabled our robotic cotton-picking system to autonomously navigate through the cotton field using map-based and GPS-based approaches, visually aided by a deep learning-based perception system. The GPS-based navigation approach achieved a 100% completion rate (CR) with a threshold of 5 × 10⁻⁶ degrees, while the map-based navigation approach attained a 96.7% CR with a threshold of 0.25 m. This study establishes a fundamental simulation baseline for future agricultural robotics and autonomous vehicles in cotton farming and beyond. CottonSim code and data are released to the research community via GitHub: https://github.com/imtheva/CottonSim
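The segmentation metrics quoted in the abstract (precision 93.0%, recall 88.9%) follow the standard definitions over true positives, false positives, and false negatives. A minimal sketch — the counts below are hypothetical values chosen only to reproduce the reported percentages, not data from the paper:

```python
def precision(tp, fp):
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that are detected."""
    return tp / (tp + fn)

# Illustrative counts: 930 true positives and 70 false positives give
# precision = 0.930; with 116 false negatives, recall comes to ~0.889.
p = precision(930, 70)
r = recall(930, 116)
```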
Problem

Research questions and friction points this paper is trying to address.

Develop autonomous visual-guided robotic cotton-picking system
Simulate cotton farm navigation using ROS and Gazebo
Achieve high-precision segmentation for field navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous visual-guided robotic cotton-picking system
ROS navigation with YOLOv8n-seg model
Simulated cotton farm in Gazebo environment
Thevathayarajh Thayananthan
Xin Zhang
Yanbo Huang
USDA ARS
Precision Agriculture, Remote Sensing, Application Technology
Jingdao Chen
Mississippi State University
BIM, Construction Robotics, Artificial Intelligence, Computer Vision
N. Wijewardane
Vitor S. Martins
G. D. Chesser
Christopher T. Goodin