🤖 AI Summary
Real-time, accurate detection of sugar beet and monocot/dicot weeds is critical for intelligent precision spraying, yet existing models lack comprehensive evaluation across hardware platforms and operating conditions. Method: This study systematically benchmarks YOLOv9, YOLOv10, and RT-DETR for weed detection under realistic agricultural constraints, using a unified Agricultural-COCO dataset and evaluating five model scales (nano to large) and five input resolutions (320–960 px) on GPU, CPU, and edge hardware (Jetson AGX Orin). Performance is quantified via mAP@0.5:0.95 and end-to-end inference latency. Results: YOLOv10-small achieves the best accuracy–speed trade-off on the Jetson AGX Orin (28.6 mAP, 42 FPS); RT-DETR gains +2.1% mAP over YOLOv9 at 960 px but exceeds real-time latency budgets. We propose a co-design strategy for selecting input resolution and model size for edge deployment, delivering reproducible empirical evidence and a practical deployment paradigm for lightweight agricultural AI.
📝 Abstract
This paper presents a comprehensive evaluation of state-of-the-art object detection models, including YOLOv9, YOLOv10, and RT-DETR, for weed detection in smart-spraying applications, focusing on three classes: Sugarbeet, Monocot, and Dicot. The models are compared on mean Average Precision (mAP) scores and inference times across different GPU and CPU devices. We consider several model variants (nano, small, medium, and large) alongside five image resolutions (320 px, 480 px, 640 px, 800 px, and 960 px). The results highlight the trade-offs between inference time and detection accuracy, providing practical guidance for selecting the most suitable model for real-time weed detection. This study aims to guide the development of efficient and effective smart-spraying systems, enhancing agricultural productivity through precise weed management.
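The latency side of the evaluation described above can be illustrated with a minimal timing harness. This is a sketch, not the paper's actual benchmarking code: `fake_inference` is a hypothetical stand-in for a detector forward pass (in practice a YOLOv9/YOLOv10/RT-DETR call on a preprocessed image), and the warm-up/averaging pattern is a common convention for stabilizing latency measurements.

```python
import time

def fake_inference(resolution: int) -> None:
    # Hypothetical stand-in for a detector's forward pass; the workload
    # here merely grows with image size for illustration.
    _ = sum(i * i for i in range(resolution * 10))

def benchmark(resolutions, runs=20, warmup=3):
    """Return {resolution: (mean_latency_ms, fps)} for the stand-in detector."""
    results = {}
    for res in resolutions:
        for _ in range(warmup):
            fake_inference(res)  # warm-up iterations are discarded
        start = time.perf_counter()
        for _ in range(runs):
            fake_inference(res)
        mean_s = (time.perf_counter() - start) / runs
        results[res] = (mean_s * 1000.0, 1.0 / mean_s)
    return results

if __name__ == "__main__":
    # Same resolution sweep as in the study (a subset shown here).
    for res, (ms, fps) in benchmark([320, 640, 960]).items():
        print(f"{res}px: {ms:.2f} ms/frame ({fps:.1f} FPS)")
```

Swapping the stand-in for a real model call (and running on the target device, e.g. a Jetson AGX Orin) yields the per-resolution latency/FPS numbers that, paired with mAP, define the accuracy–speed trade-off curves.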