🤖 AI Summary
In tropical tuna purse-seine fisheries, the high visual similarity between bigeye tuna (BET) and yellowfin tuna (YFT) makes accurate species identification in electronic monitoring (EM) imagery difficult. Method: This paper proposes a hierarchical multi-stage deep learning framework: (1) YOLOv9 and SAM2 are combined for high-precision catch instance segmentation; (2) ByteTrack tracks segmented individuals across frames; and (3) a hierarchical classification model sharpens fine-grained BET/YFT discrimination. In parallel, a reliable ground-truth dataset built from identifications made by observers on board mitigates the inconsistency found between expert annotators. Contribution/Results: Tested on fishing operations with fully known catch composition, the system segments and classifies 84.8% of individuals and estimates species composition with a mean absolute error of only 4.5%, outperforming the alternative segmentation approaches (Mask R-CNN, DINOv2+SAM2) and a flat multiclass classifier. This work delivers a practical, accurate technical solution to support sustainable tuna stock monitoring.
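The hierarchical classification idea can be illustrated with a minimal sketch. This is not the paper's implementation: the class names, thresholds, and probability values below are hypothetical placeholders standing in for real model outputs. It assumes a coarse stage that separates tuna from other catch, with the fine-grained BET/YFT decision applied only to detections the coarse stage accepts as tuna.

```python
# Hypothetical sketch of a two-level hierarchical classifier head.
# Coarse stage: tuna vs. other catch. Fine stage: BET vs. YFT.
# All probabilities are illustrative placeholders, not model outputs.

def hierarchical_label(coarse_probs, fine_probs, coarse_threshold=0.5):
    """Combine coarse (tuna vs. other) and fine (BET vs. YFT) outputs.

    coarse_probs: dict with keys "tuna" and "other"
    fine_probs:   dict with keys "BET" and "YFT"
    """
    if coarse_probs["tuna"] < coarse_threshold:
        return "other"
    # Chain-rule style score: P(species) = P(tuna) * P(species | tuna)
    scores = {sp: coarse_probs["tuna"] * p for sp, p in fine_probs.items()}
    return max(scores, key=scores.get)

print(hierarchical_label({"tuna": 0.9, "other": 0.1}, {"BET": 0.3, "YFT": 0.7}))  # YFT
print(hierarchical_label({"tuna": 0.2, "other": 0.8}, {"BET": 0.6, "YFT": 0.4}))  # other
```

One motivation for such a decomposition is that the fine-grained BET/YFT decision, which is hard even for experts, is isolated from the easier coarse decision, so errors in one stage do not directly corrupt training of the other.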
📝 Abstract
Purse seiners play a crucial role in tuna fishing, as approximately 69% of the world's tropical tuna is caught using this gear. All tuna Regional Fisheries Management Organizations have established minimum standards for the use of electronic monitoring (EM) in fisheries in addition to traditional observers. EM systems produce a massive amount of video data that human analysts must process. Integrating artificial intelligence (AI) into their workflow can decrease that workload and improve the accuracy of the reports. However, species identification still poses significant challenges for AI, as achieving balanced performance across all species requires appropriate training data. Here, we quantify the difficulty experts face in distinguishing bigeye tuna (BET, *Thunnus obesus*) from yellowfin tuna (YFT, *Thunnus albacares*) in images captured by EM systems. We found inter-expert agreement of 42.9% ± 35.6% for BET and 57.1% ± 35.6% for YFT. We then present a multi-stage pipeline to estimate the species composition of the catches using a reliable ground-truth dataset based on identifications made by observers on board. Three segmentation approaches are compared: Mask R-CNN, a combination of DINOv2 with SAM2, and an integration of YOLOv9 with SAM2. The latter performs best, with a validation mean average precision of 0.66 ± 0.03 and a recall of 0.88 ± 0.03. Segmented individuals are tracked using ByteTrack. For classification, we evaluate a standard multiclass model and a hierarchical approach, and find that the hierarchical approach generalizes better. All our models were cross-validated during training and tested on fishing operations with fully known catch composition. Combining YOLOv9-SAM2 with the hierarchical classification produced the best estimates, with 84.8% of the individuals segmented and classified and a mean absolute error of 4.5%.
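The evaluation metric reported above, the error of the estimated species composition against the observer ground truth, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the species labels (including the skipjack code "SKJ") and counts are hypothetical examples.

```python
# Minimal sketch (hypothetical data, not the paper's code) of comparing a
# predicted species composition against observer ground truth using the
# mean absolute error over species, in percentage points.

from collections import Counter

def composition(labels):
    """Fraction of individuals per species, e.g. {"BET": 0.2, "YFT": 0.7, ...}."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {sp: n / total for sp, n in counts.items()}

def mean_absolute_error(pred, truth):
    """Mean absolute difference of per-species fractions, in percentage points."""
    species = set(pred) | set(truth)
    return 100 * sum(abs(pred.get(sp, 0.0) - truth.get(sp, 0.0))
                     for sp in species) / len(species)

# Hypothetical catch: per-individual labels from tracking + classification
predicted = composition(["YFT"] * 70 + ["BET"] * 20 + ["SKJ"] * 10)
observed  = composition(["YFT"] * 75 + ["BET"] * 15 + ["SKJ"] * 10)

print(round(mean_absolute_error(predicted, observed), 1))  # 3.3
```

Because the metric is computed on per-species fractions rather than raw counts, it stays comparable across fishing operations of very different catch sizes.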