Can 3D point cloud data improve automated body condition score prediction in dairy cattle?

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of traditional body condition scoring (BCS) in dairy cattle, which relies on subjective human assessment and suffers from low efficiency, by systematically evaluating two 3D representations, depth images and 3D point clouds, for automated BCS prediction. Leveraging a field dataset of 1,020 cows, the authors conduct the first comprehensive comparison across four data processing configurations, varying data partitioning strategies, feature extraction methods, and model architectures, with individual-level cross-validation to ensure rigorous evaluation. Results show that depth images consistently match or outperform 3D point clouds across most settings, whereas point clouds offer no consistent advantage and are more sensitive to noise and model choice, with marked performance degradation when paired with handcrafted features. These findings provide empirical guidance for selecting 3D sensing modalities in intelligent livestock management systems.
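The two modalities compared here are geometrically related: a point cloud can be obtained by back-projecting a depth image through the camera intrinsics. The sketch below illustrates that relationship under a pinhole camera model; the intrinsics `fx`, `fy`, `cx`, `cy` and the toy depth map are hypothetical, and the paper's actual sensor pipeline is not specified here.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud
    using a pinhole camera model. Zero-depth pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only valid measurements

# toy 2x2 depth map: a flat surface 1 m from the camera
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

One practical implication of this equivalence, consistent with the study's framing, is that a point cloud carries no extra information beyond the depth image it came from; any difference in predictive performance comes from how each representation is processed downstream.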

📝 Abstract
Body condition score (BCS) is a widely used indicator of body energy status and is closely associated with metabolic status, reproductive performance, and health in dairy cattle; however, conventional visual scoring is subjective and labor-intensive. Computer vision approaches have been applied to BCS prediction, with depth images widely used because they capture geometric information independent of coat color and texture. More recently, three-dimensional point cloud data have attracted increasing interest due to their ability to represent richer geometric characteristics of animal morphology, but direct head-to-head comparisons with depth image-based approaches remain limited. In this study, we compared top-view depth image and point cloud data for BCS prediction under four settings: 1) unsegmented raw data, 2) segmented full-body data, 3) segmented hindquarter data, and 4) handcrafted feature data. Prediction models were evaluated using data from 1,020 dairy cows collected on a commercial farm, with cow-level cross-validation to prevent data leakage. Depth image-based models consistently achieved higher accuracy than point cloud-based models when unsegmented raw data and segmented full-body data were used, whereas comparable performance was observed when segmented hindquarter data were used. Both depth image and point cloud approaches showed reduced accuracy when handcrafted feature data were employed compared with the other settings. Overall, point cloud-based predictions were more sensitive to noise and model architecture than depth image-based predictions. Taken together, these results indicate that three-dimensional point clouds do not provide a consistent advantage over depth images for BCS prediction in dairy cattle under the evaluated conditions.
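The cow-level cross-validation described above can be illustrated with a short sketch: frames are partitioned by animal identity so that no cow contributes to both the training and test split of any fold, which is what prevents identity leakage. The grouping function and toy data below are illustrative, not the authors' code.

```python
import numpy as np

def cow_level_folds(cow_ids, n_splits=5, seed=0):
    """Assign whole cows (not individual frames) to folds, so every
    frame from a given cow lands in exactly one fold."""
    cows = np.unique(cow_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(cows)
    fold_of_cow = {c: i % n_splits for i, c in enumerate(cows)}
    folds = np.array([fold_of_cow[c] for c in cow_ids])
    return [(np.where(folds != k)[0], np.where(folds == k)[0])
            for k in range(n_splits)]

cow_ids = np.repeat(np.arange(10), 4)  # toy data: 10 cows, 4 frames each
splits = cow_level_folds(cow_ids, n_splits=5)
for train_idx, test_idx in splits:
    # no cow appears on both sides of any split
    assert set(cow_ids[train_idx]).isdisjoint(cow_ids[test_idx])
```

A frame-level random split would instead scatter the same cow's frames across train and test, letting a model score well by recognising individuals rather than body condition, which is the leakage this design rules out.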
Problem

Research questions and friction points this paper is trying to address.

body condition score · 3D point cloud · depth image · dairy cattle · automated prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D point cloud · depth image · body condition score · dairy cattle · computer vision
👥 Authors
Zhou Tang
Department of Animal Sciences, Institute of Food and Agricultural Sciences, University of Florida, Gainesville, FL 32611, USA
Jin Wang
Department of Animal Sciences, Institute of Food and Agricultural Sciences, University of Florida, Gainesville, FL 32611, USA
Angelo De Castro
Department of Animal Sciences, Institute of Food and Agricultural Sciences, University of Florida, Gainesville, FL 32611, USA
Yuxi Zhang
University of Illinois, Urbana-Champaign (condensed matter physics)
Victoria Bastos Primo
Department of Large Animal Clinical Sciences, University of Florida, Gainesville, FL 32611, USA
Ana Beatriz Montevecchio Bernardino
Department of Large Animal Clinical Sciences, University of Florida, Gainesville, FL 32611, USA
G. Morota
Laboratory of Biometry and Bioinformatics, Department of Agricultural and Environmental Biology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Bunkyo, Tokyo 113-8657, Japan
Xu Wang
University of Florida (Plant Phenomics, Machine Vision, Machine Learning, Unmanned Aerial Systems)
R. C. Chebel
Department of Large Animal Clinical Sciences, University of Florida, Gainesville, FL 32611, USA
Haipeng Yu
Department of Animal Sciences, Institute of Food and Agricultural Sciences, University of Florida, Gainesville, FL 32611, USA