Fast-COS: A Fast One-Stage Object Detector Based on Reparameterized Attention Vision Transformer for Autonomous Driving

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the longstanding trade-off between accuracy and inference speed in real-time autonomous driving perception, this paper proposes Fast-COS, a lightweight and efficient one-stage detector. Its backbone, the Reparameterized Attention Vision Transformer (RAViT), synergistically integrates RepMSDW (reparameterized multi-scale depthwise convolution) with RepSA (reparameterized self-attention), and is paired with a high-speed RepFPN for efficient multi-scale feature fusion. Evaluated on BDD100K and TJU-DHD, Fast-COS achieves AP₅₀ scores of 57.2% and 80.0%, respectively. On GPU, it attains up to 75.9% higher throughput than FCOS; on edge devices, it delivers a 1.38× throughput improvement. The RAViT backbone additionally achieves 81.4% Top-1 accuracy on ImageNet-1K. Fast-COS significantly advances the accuracy–efficiency Pareto frontier for deployment on resource-constrained vehicular platforms.

📝 Abstract
The perception system plays a critical role in an autonomous driving system by ensuring safety. Driving-scene perception is fundamentally an object detection task that requires balancing accuracy against processing speed. Many contemporary methods focus on improving detection accuracy but overlook real-time detection capability when computational resources are limited. It is therefore vital to investigate efficient object detection strategies for driving scenes. This paper introduces Fast-COS, a novel single-stage object detection framework crafted specifically for driving-scene applications. The research begins with an analysis of the backbone, considering both macro and micro architectural designs, yielding the Reparameterized Attention Vision Transformer (RAViT). RAViT employs Reparameterized Multi-Scale Depth-Wise Convolution (RepMSDW) and Reparameterized Self-Attention (RepSA) to enhance computational efficiency and feature extraction. In extensive tests across GPU, edge, and mobile platforms, RAViT achieves 81.4% Top-1 accuracy on the ImageNet-1K dataset, demonstrating significant throughput improvements over comparable backbones such as ResNet, FastViT, RepViT, and EfficientFormer. Additionally, integrating RepMSDW into the feature pyramid network yields RepFPN, enabling fast multi-scale feature fusion. Fast-COS enhances object detection in driving scenes, attaining an AP₅₀ score of 57.2% on the BDD100K dataset and 80.0% on the TJU-DHD Traffic dataset. It surpasses leading models in efficiency, delivering up to 75.9% faster GPU inference and 1.38× higher throughput on edge devices compared with FCOS, YOLOF, and RetinaNet. These findings establish Fast-COS as a highly scalable and reliable solution for real-time applications, especially in resource-limited environments such as autonomous driving systems.
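The paper does not include code here, but the structural-reparameterization idea behind a RepMSDW-style block rests on a simple linearity identity: several parallel depthwise convolutions at different kernel scales, used during training, can be folded into a single kernel for inference by zero-padding the smaller kernels and summing. The sketch below illustrates that identity for one channel in plain NumPy; the kernel sizes and the naive convolution helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D cross-correlation with 'same' zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))

# Two parallel depthwise branches at different scales (illustrative of the
# multi-scale idea; the paper's actual branch configuration may differ).
k5 = rng.standard_normal((5, 5))
k3 = rng.standard_normal((3, 3))

# Training-time view: sum the outputs of both branches.
y_multi = conv2d_same(x, k5) + conv2d_same(x, k3)

# Inference-time view: fold the 3x3 kernel into the 5x5 one by zero-padding,
# then run a single convolution.
k_merged = k5 + np.pad(k3, 1)
y_merged = conv2d_same(x, k_merged)

print(np.allclose(y_multi, y_merged))  # the two views are numerically identical
```

Because convolution is linear in the kernel, the merged single-branch form reproduces the multi-branch output exactly, which is how reparameterized blocks keep training-time expressiveness while paying only single-branch cost at inference.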
Problem

Research questions and friction points this paper is trying to address.

Enhance real-time object detection accuracy
Optimize computational efficiency for autonomous driving
Develop scalable, resource-efficient detection framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reparameterized Attention Vision Transformer
Reparameterized Multi-Scale Depth-Wise Convolution
Fast One-Stage Object Detector