🤖 AI Summary
Fire detection in dynamic scenes faces challenges including severe illumination interference, low recall for small flames, and difficulty balancing accuracy and efficiency. To address these, this paper proposes a lightweight and efficient YOLO-based model. Methodologically, we introduce an Attention-guided Inverted Residual (AIR) module to adaptively enhance flame-specific features while suppressing noise, and a Dual-Pooling Downsampling Fusion (DPDF) module that combines learnable max–average pooling with multi-scale feature preservation to significantly improve small-flame detection. The model further integrates hybrid channel-spatial attention and a lightweight feature pyramid. With only 1.45M parameters (51.8% fewer than YOLOv8n) and 4.6G FLOPs (43.2% reduction), it achieves superior mAP₇₅—outperforming mainstream real-time models (YOLOv8n, v9t, v10n, v11n, v12n) by 1.3–5.5%.
📝 Abstract
Fire detection in dynamic environments faces persistent challenges, including interference from illumination changes, frequent false and missed detections, and the difficulty of achieving both efficiency and accuracy. To address the limited feature extraction and information loss of existing YOLO-based models, this study proposes You Only Look Once for Fire Detection with Attention-guided Inverted Residual and Dual-pooling Downscale Fusion (YOLO-FireAD), with two core innovations: (1) the Attention-guided Inverted Residual (AIR) block integrates hybrid channel-spatial attention with inverted residuals to adaptively enhance fire features and suppress environmental noise; (2) the Dual-Pooling Downscale Fusion (DPDF) block preserves multi-scale fire patterns through a learnable fusion of max- and average-pooling outputs, mitigating small-fire detection failures. Extensive evaluation on two public datasets demonstrates the efficiency of our model. The proposed model uses only 1.45M parameters (51.8% fewer than YOLOv8n) and 4.6G FLOPs (43.2% fewer than YOLOv8n), while its mAP₇₅ exceeds that of the mainstream real-time object detection models YOLOv8n, YOLOv9t, YOLOv10n, YOLO11n, YOLOv12n and other YOLOv8 variants by 1.3–5.5%.
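The core idea behind DPDF — a learnable gate that blends max pooling (which keeps sharp flame peaks) with average pooling (which keeps surrounding context) during downsampling — can be illustrated with a minimal single-channel NumPy sketch. The scalar gate `alpha`, the sigmoid gating, and the 2×2 window are illustrative assumptions; the abstract does not specify the block's exact layout:

```python
import numpy as np

def pool2x2(x, mode):
    # Downsample a (H, W) map by 2 using non-overlapping 2x2 windows.
    H, W = x.shape
    w = x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2)
    return w.max(axis=(1, 3)) if mode == "max" else w.mean(axis=(1, 3))

def dpdf_fusion(x, alpha=0.0):
    # alpha is the learnable fusion parameter (a trained weight in the model);
    # sigmoid(alpha) gates between max pooling (flame peaks, edges) and
    # average pooling (background context) so small-fire cues survive downsampling.
    g = 1.0 / (1.0 + np.exp(-alpha))
    return g * pool2x2(x, "max") + (1.0 - g) * pool2x2(x, "avg")
```

With `alpha = 0` the two pooled maps are weighted equally; as training pushes `alpha` up or down, the block leans toward max or average pooling per layer.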