Counting with Confidence: Accurate Pest Monitoring in Water Traps

📅 2025-05-19
🏛️ IFAC-PapersOnLine
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the challenge of unreliable pest counting in real-world deployment due to the absence of ground-truth annotations and difficulty in assessing result credibility, this paper proposes the first end-to-end confidence estimation framework for pest image counting. The method integrates multi-source features—including object detection outputs, image sharpness (measured by mean gradient magnitude), image quality and complexity, and spatial uniformity of pest distribution (quantified via adaptive DBSCAN)—into a multi-feature regression model. Crucially, it introduces a hypothesis-driven, multi-factor sensitivity analysis to identify the most discriminative evaluation metrics. Evaluated on a custom-built test set, the framework reduces mean squared error by 31.7% and improves the coefficient of determination (R²) by 15.2% over a baseline relying solely on detection outputs, demonstrating substantial gains in both accuracy and robustness of confidence estimation.
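The summary mentions measuring image sharpness by the mean gradient magnitude, which quantifies blur introduced by stirring the water trap during acquisition. A minimal sketch of that measure, assuming a plain grayscale array as input (the paper's exact preprocessing is not specified here):

```python
import numpy as np

def mean_gradient_magnitude(gray: np.ndarray) -> float:
    """Average gradient magnitude of a grayscale image.

    Higher values indicate sharper images; motion blur from
    stirring during acquisition lowers this score.
    """
    gy, gx = np.gradient(gray.astype(float))  # per-pixel row/col gradients
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```

A constant image scores 0, while an image with a sharp edge scores higher, so the value can serve directly as a sharpness feature.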

📝 Abstract
Accurate pest population monitoring and tracking of their dynamic changes are crucial for precision agriculture decision-making. A common limitation of existing vision-based automatic pest counting research is that models are typically evaluated on datasets with ground truth but deployed in real-world scenarios where the reliability of counting results cannot be assessed due to the lack of ground truth. To this end, this paper proposes a method for comprehensively evaluating pest counting confidence in an image, based on information related to both the counting results and external environmental conditions. First, a pest detection network detects and counts pests, extracting counting-result-related information. Then, the pest images undergo image quality assessment, image complexity assessment, and pest distribution uniformity assessment, and the changes in image clarity caused by stirring during image acquisition are quantified by computing the mean gradient magnitude. Notably, we designed a hypothesis-driven multi-factor sensitivity analysis method to select the optimal image quality and image complexity assessment methods, and we proposed an adaptive DBSCAN clustering algorithm for pest distribution uniformity assessment. Finally, the obtained information related to counting results and external environmental conditions is fed into a regression model to predict the final pest counting confidence. To the best of our knowledge, this is the first study dedicated to comprehensively evaluating counting confidence in counting tasks and to quantifying the relationship between influencing factors and counting confidence through a model. Experimental results show our method reduces MSE by 31.7% and improves R² by 15.2% on the pest counting confidence test set, compared to a baseline built primarily on information related to counting results.
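The final step described above, feeding detection-derived and environment-derived features into a regression model that outputs a confidence score, can be sketched as follows. The feature set, the synthetic data, and the choice of gradient-boosted trees are all assumptions for illustration; the paper does not specify its regressor here:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical per-image feature vector: detection outputs (count,
# mean detection score) plus environmental features (sharpness,
# image quality, complexity, distribution uniformity).
rng = np.random.default_rng(0)
X = rng.random((200, 6))  # stand-in feature vectors
# Stand-in confidence targets loosely tied to two of the features.
y = 0.5 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * rng.random(200)

model = GradientBoostingRegressor().fit(X[:150], y[:150])
pred = model.predict(X[150:])  # predicted counting confidence
mse = float(np.mean((pred - y[150:]) ** 2))
```

In deployment, the predicted confidence accompanies each count, flagging images whose results should not be trusted.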
Problem

Research questions and friction points this paper is trying to address.

Develops a method to evaluate pest counting confidence in images
Addresses reliability gaps in vision-based pest monitoring without ground truth
Quantifies relationships between environmental factors and counting accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pest detection network for counting and extracting result-related information
Multi-factor sensitivity analysis for optimal image quality and complexity assessment
Adaptive DBSCAN clustering algorithm for pest distribution uniformity evaluation
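The adaptive DBSCAN idea above can be sketched by choosing `eps` from the data itself and summarizing the resulting clustering as distribution features for the confidence regressor. The median k-th nearest-neighbour distance is one common adaptation heuristic; the paper's exact adaptation rule and feature mapping are not specified here:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan_features(points, k: int = 4) -> dict:
    """Cluster pest centroids with a data-adaptive eps and return
    spatial-distribution features (a sketch, not the paper's exact rule)."""
    points = np.asarray(points, dtype=float)
    if len(points) <= k:  # too few detections to judge clumping
        return {"eps": 0.0, "n_clusters": 0,
                "noise_ratio": 0.0, "largest_cluster_frac": 0.0}
    # eps adapted to local point density: median distance to the k-th neighbour
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    eps = float(np.median(dists[:, k]))
    labels = DBSCAN(eps=eps, min_samples=k).fit_predict(points)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    sizes = np.bincount(labels[labels >= 0]) if n_clusters else np.array([0])
    return {"eps": eps,
            "n_clusters": n_clusters,
            "noise_ratio": float(np.mean(labels == -1)),
            "largest_cluster_frac": float(sizes.max()) / len(points)}
```

Heavily clumped detections yield one dominant cluster, while an even spread yields several clusters or more noise points; these features then enter the regression model alongside the sharpness and quality scores.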
Xumin Gao
Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, UK
Mark Stevens
British Beet Research Organisation, Colney Lane, Norwich, UK
Grzegorz Cielniak
Professor of Robotics, University of Lincoln
Mobile Robotics · Agricultural Robotics · Machine Vision · AI · Chronorobotics