🤖 AI Summary
This work addresses the inefficiency in YOLO training caused by uniformly processing all images regardless of their individual learning states. To overcome this limitation, the authors propose the Anti-Forgetting Sampling Strategy (AFSS), which integrates dynamic image importance assessment and an anti-forgetting mechanism into the YOLO training pipeline for the first time. AFSS dynamically evaluates the learning sufficiency of each image based on the minimum of detection recall and precision, categorizing samples into easy, medium, and hard groups. It then employs stratified sampling with periodic updates to enable adaptive training. Experiments across multiple benchmarks—including MS COCO, PASCAL VOC, DOTA-v1.0, and DIOR-R—demonstrate that AFSS accelerates YOLO training by at least 1.43× while achieving slight improvements in accuracy.
📝 Abstract
YOLO detectors are known for their fast inference speed, yet training them remains unexpectedly time-consuming because the standard pipeline exhaustively processes every training image in every epoch, even when many images have already been sufficiently learned. This stands in clear contrast to the efficiency suggested by the "You Only Look Once" philosophy, and naturally raises an important question: *does YOLO really need to see every training image in every epoch?* To explore this, we propose an Anti-Forgetting Sampling Strategy (AFSS) that dynamically determines which images should be used and which can be skipped in each epoch, allowing the detector to learn more effectively and efficiently. Specifically, AFSS measures the learning sufficiency of each training image as the minimum of its detection recall and precision, and dynamically categorizes images into easy, medium, and hard levels accordingly. Easy images are sparsely resampled in a continuous-review manner, with priority given to those that have gone unused the longest, reducing redundancy while preventing forgetting. Medium images are partially selected: recently unused ones are prioritized, and the remainder are drawn at random from the unselected pool to ensure coverage and prevent forgetting. Hard images are fully sampled in every epoch to guarantee sufficient learning. The learning sufficiency of each image is periodically updated, enabling the detector to adaptively shift its focus toward informative images over time while progressively discarding redundant ones. On widely used natural image detection benchmarks (MS COCO 2017 and PASCAL VOC 2007) and remote sensing detection datasets (DOTA-v1.0 and DIOR-R), AFSS achieves more than 1.43× training speedup for YOLO-series detectors while also improving accuracy.
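To make the sampling logic concrete, here is a minimal sketch of how one AFSS epoch sample could be assembled from per-image sufficiency scores. The thresholds (`easy_thr`, `hard_thr`) and sampling rates (`easy_rate`, `medium_rate`) are illustrative assumptions, not values from the paper, and the function names are hypothetical:

```python
import random

def afss_epoch_sample(sufficiency, last_used, epoch,
                      easy_thr=0.9, hard_thr=0.5,
                      easy_rate=0.1, medium_rate=0.5, seed=0):
    """Sketch of one AFSS epoch sample (thresholds/rates are assumptions).

    sufficiency: dict image_id -> min(recall, precision) on that image
    last_used:   dict image_id -> last epoch the image was sampled
    """
    rng = random.Random(seed + epoch)

    # Stratify images by learning sufficiency.
    easy   = [i for i, s in sufficiency.items() if s >= easy_thr]
    hard   = [i for i, s in sufficiency.items() if s < hard_thr]
    medium = [i for i, s in sufficiency.items() if hard_thr <= s < easy_thr]

    # Hard images: fully sampled in every epoch.
    selected = list(hard)

    # Easy images: sparse continuous review, least recently used first.
    easy.sort(key=lambda i: last_used.get(i, -1))
    selected += easy[:int(easy_rate * len(easy))]

    # Medium images: half of the budget goes to least-recently-used ones,
    # the rest is drawn at random from the remaining unselected pool.
    medium.sort(key=lambda i: last_used.get(i, -1))
    k = int(medium_rate * len(medium))
    lru_part = medium[:k // 2]
    rest = medium[k // 2:]
    rand_part = rng.sample(rest, min(k - len(lru_part), len(rest)))
    selected += lru_part + rand_part

    # Record usage so "recently unused" priorities stay current.
    for i in selected:
        last_used[i] = epoch
    return selected
```

In a real training loop, `sufficiency` would be refreshed periodically by re-evaluating the detector on the training set, which is what lets the strata shift as learning progresses.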