DepthCropSeg++: Scaling a Crop Segmentation Foundation Model With Depth-Labeled Data

📅 2026-01-18
🏛️ IEEE Journal on Selected Topics in Signal Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of cross-species and cross-environment generalization in crop segmentation under open-field conditions, where existing methods rely heavily on costly pixel-level annotations. To this end, the authors construct a large-scale dataset labeled via depth cues, encompassing over 30 crop species across 15 diverse environmental conditions. They propose a nearly unsupervised cross-species segmentation framework built upon an enhanced ViT-Adapter architecture, incorporating a dynamic upsampling module to improve fine-grained detail perception and a two-stage self-training strategy to efficiently leverage limited labeled data. The method achieves a state-of-the-art mIoU of 93.11% on a comprehensive test set, significantly outperforming both supervised baselines and the Segment Anything Model (SAM), particularly in challenging scenarios such as nighttime imaging, dense canopies, and unseen crop species—demonstrating, for the first time, the strong generalization capability of foundation models on large-scale real-world agricultural data.
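The paper itself does not publish its training code, but the core of a two-stage self-training strategy like the one summarized above can be illustrated with a confidence-filtered pseudo-labelling step: a first-stage teacher's per-pixel class probabilities become labels for the second stage, with uncertain pixels masked out. A minimal sketch, assuming a hypothetical `pseudo_label` helper and the conventional `255` ignore index:

```python
# Hypothetical sketch (not the paper's code): convert a teacher model's
# per-pixel softmax probabilities into pseudo-labels for a student,
# ignoring pixels whose top-class confidence falls below a threshold.
IGNORE = 255  # conventional "ignore" index in semantic segmentation losses

def pseudo_label(probs, threshold=0.9):
    """probs: H x W grid of per-class probability lists (teacher softmax).
    Returns an H x W label map; low-confidence pixels get IGNORE."""
    labels = []
    for row in probs:
        out_row = []
        for p in row:
            conf = max(p)          # top-class confidence at this pixel
            cls = p.index(conf)    # its class index
            out_row.append(cls if conf >= threshold else IGNORE)
        labels.append(out_row)
    return labels

# Toy 1x3 "image": confident background, confident crop, ambiguous pixel.
probs = [[[0.95, 0.05], [0.08, 0.92], [0.55, 0.45]]]
print(pseudo_label(probs))  # [[0, 1, 255]]
```

In a full pipeline the retained pseudo-labels would supervise a second training stage, while `IGNORE` pixels contribute nothing to the loss.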

📝 Abstract
DepthCropSeg++ is a foundation model for crop segmentation, capable of segmenting different crop species in open in-field environments. Crop segmentation is a fundamental task in modern agriculture, closely related to many downstream tasks such as plant phenotyping, density estimation, and weed control. In the era of foundation models, a number of generic large language and vision models have been developed; these models demonstrate remarkable real-world generalization thanks to significant model capacity and large-scale datasets. However, current crop segmentation models mostly learn from limited data due to the expensive cost of pixel-level labelling, and often perform well only for specific crop types or in controlled environments. In this work, we follow the vein of our previous work DepthCropSeg, an almost unsupervised approach to crop segmentation, to scale up a cross-species and cross-scene crop segmentation dataset with 28,406 images across 30+ species and 15 environmental conditions. We also build upon the state-of-the-art semantic segmentation architecture ViT-Adapter, enhance it with dynamic upsampling for improved detail awareness, and train the model with a two-stage self-training pipeline. To systematically validate model performance, we conduct comprehensive experiments examining effectiveness and generalization across multiple crop datasets. Results show that DepthCropSeg++ achieves 93.11% mIoU on a comprehensive testing set, outperforming both supervised baselines and general-purpose vision foundation models such as the Segment Anything Model (SAM) by significant margins (+0.36% and +48.57%, respectively). The model particularly excels in challenging scenarios, including night-time environments (86.90% mIoU), high-density canopies (90.09% mIoU), and unseen crop varieties (90.09% mIoU), setting a new state of the art for crop segmentation.
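All of the headline numbers above are mean intersection-over-union (mIoU). For readers unfamiliar with the metric, this is the standard definition (not code from the paper): per-class IoU between prediction and ground truth, averaged over classes.

```python
# Minimal sketch of the standard mIoU metric over flattened label maps.
def miou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and g == c for p, g in zip(pred, gt))
        union = sum(p == c or g == c for p, g in zip(pred, gt))
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy flattened masks (0 = background, 1 = crop):
pred = [0, 1, 1, 0, 1]
gt   = [0, 1, 0, 0, 1]
print(round(miou(pred, gt, 2), 4))  # 0.6667 (both classes score 2/3)
```

A 93.11% mIoU on a binary crop/background task therefore means both classes are segmented with very high pixel-level overlap, not merely high pixel accuracy.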
Problem

Research questions and friction points this paper is trying to address.

crop segmentation
foundation model
open-field environment
cross-species generalization
pixel-level labeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

foundation model
crop segmentation
depth-labeled data
dynamic upsampling
self-training
Jiafei Zhang
MetaPheno Laboratory, Shanghai 201114, China; PhenoTrait Technology Co., Ltd., Beijing 100096, China
Songliang Cao
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China
Binghui Xu
Wuhan Digital Engineering Institute, Wuhan 430205, China
Yanan Li
Hubei Key Laboratory of Intelligent Robot, School of Computer Science and Engineering, School of Artificial Intelligence, Wuhan Institute of Technology, Wuhan, 450205, China
Weiwei Jia
Beijing Agricultural Technology Extension Station, Beijing 100029, China
Tingting Wu
College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China
Hao Lu
Associate Professor, Huazhong University of Science and Technology
Computer Vision · Deep Learning · Plant Phenotyping
Weijuan Hu
Laboratory of Advanced Breeding Technologies, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China
Zhiguo Han
MetaPheno Laboratory, Shanghai 201114, China; PhenoTrait Technology Co., Ltd., Beijing 100096, China