FIN: Fast Inference Network for Map Segmentation

πŸ“… 2025-10-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Addressing the dual challenges of accuracy and real-time performance in multi-sensor semantic mapping for autonomous driving, this paper proposes an efficient BEV-space camera-radar fusion segmentation architecture. Methodologically, the authors design a lightweight segmentation head and introduce a class-balanced composite loss function to enable tightly coupled multimodal fusion directly at the BEV feature level. Compared to state-of-the-art approaches, the method significantly reduces model complexity and inference latency while preserving strong segmentation capability: it achieves 53.5 mIoU on mainstream benchmarks and improves inference speed by 260% over the strongest baseline. The proposed framework establishes a new paradigm for real-time, high-accuracy semantic map construction in autonomous vehicle systems.

πŸ“ Abstract
Multi-sensor fusion in autonomous vehicles is becoming more common, offering a more robust alternative for several perception tasks. This need arises from the unique contribution of each sensor in collecting data: camera-radar fusion offers a cost-effective solution by combining rich semantic information from cameras with accurate distance measurements from radar, without incurring excessive financial costs or overwhelming data processing requirements. Map segmentation is a critical task for enabling effective vehicle behaviour in its environment, yet it continues to face significant challenges in achieving high accuracy and meeting real-time performance requirements. Therefore, this work presents a novel and efficient map segmentation architecture, using cameras and radars, in the bird's-eye view (BEV) space. Our model introduces a real-time map segmentation architecture considering aspects such as high accuracy, per-class balancing, and inference time. To accomplish this, we use an advanced loss set together with a new lightweight head to improve the perception results. Our results show that, with these modifications, our approach achieves results comparable to large models, reaching 53.5 mIoU, while also setting a new benchmark for inference time, improving it by 260% over the strongest baseline models.
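The abstract does not specify the composition of the "advanced loss set" beyond per-class balancing. As a purely illustrative assumption, one common class-balanced composite loss combines inverse-frequency-weighted cross-entropy with a soft Dice term, which a minimal NumPy sketch can show (the function name and weighting scheme here are hypothetical, not the paper's actual formulation):

```python
import numpy as np

def class_balanced_composite_loss(probs, target, eps=1e-6):
    """Hypothetical class-balanced composite loss:
    inverse-frequency-weighted cross-entropy + soft Dice.
    probs:  (N, C) softmax probabilities per pixel/cell
    target: (N,)   integer class labels
    """
    n, c = probs.shape
    onehot = np.eye(c)[target]                       # (N, C) one-hot labels
    # Inverse-frequency class weights: rare classes get larger weight
    freq = onehot.sum(axis=0) + eps
    weights = freq.sum() / (c * freq)
    # Weighted cross-entropy term
    ce = -(weights * onehot * np.log(probs + eps)).sum(axis=1).mean()
    # Soft Dice term, averaged over classes
    inter = (probs * onehot).sum(axis=0)
    union = probs.sum(axis=0) + onehot.sum(axis=0)
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return ce + dice
```

The two terms are complementary: cross-entropy optimises per-cell likelihood, while the Dice term directly targets region overlap (closer to mIoU), which is why such combinations are popular for imbalanced BEV segmentation.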
Problem

Research questions and friction points this paper is trying to address.

Achieving real-time map segmentation for autonomous vehicles
Improving accuracy and efficiency in multi-sensor fusion
Balancing computational speed with semantic segmentation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Camera-radar fusion in BEV space
Lightweight head with advanced loss set
Real-time performance with high accuracy
Ruan Bispo
Department of Electronic and Computer Engineering, University of Limerick, Limerick V94 T9PX, Ireland
Tim Brophy
University of Galway
Reenu Mohandas
Department of Electronic and Computer Engineering, University of Limerick, Limerick V94 T9PX, Ireland
Anthony Scanlan
Department of Electronic and Computer Engineering, University of Limerick, Limerick V94 T9PX, Ireland
CiarΓ‘n Eising
University of Limerick
computer vision