LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation

📅 2025-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Lightweight yet accurate dynamic obstacle perception remains challenging for indoor autonomous robot navigation. Method: This paper proposes an embedded-friendly cross-modal LiDAR-visual fusion framework. It employs a robust feature-level sensor fusion strategy, integrates lightweight multi-detectors, and unifies detection, association, tracking, and classification via feature matching and Kalman filtering. To our knowledge, it is the first to achieve real-time dynamic obstacle state estimation on resource-constrained embedded platforms (Jetson AGX Orin). Contribution/Results: The method outperforms mainstream benchmarks on public datasets. Real-world quadrotor experiments demonstrate end-to-end inference latency under 30 ms, enabling safe, real-time navigation in dynamic indoor environments.

Technology Category

Application Category

📝 Abstract
Accurate perception of dynamic obstacles is essential for autonomous robot navigation in indoor environments. Although sophisticated 3D object detection and tracking methods have been investigated and developed thoroughly in the fields of computer vision and autonomous driving, their demands on expensive and high-accuracy sensor setups and substantial computational resources from large neural networks make them unsuitable for indoor robotics. Recently, more lightweight perception algorithms leveraging onboard cameras or LiDAR sensors have emerged as promising alternatives. However, relying on a single sensor poses significant limitations: cameras have limited fields of view and can suffer from high noise, whereas LiDAR sensors operate at lower frequencies and lack the richness of visual features. To address this limitation, we propose a dynamic obstacle detection and tracking framework that uses both onboard camera and LiDAR data to enable lightweight and accurate perception. Our proposed method expands on our previous ensemble detection approach, which integrates outputs from multiple low-accuracy but computationally efficient detectors to ensure real-time performance on the onboard computer. In this work, we propose a more robust fusion strategy that integrates both LiDAR and visual data to enhance detection accuracy further. We then utilize a tracking module that adopts feature-based object association and the Kalman filter to track and estimate detected obstacles' states. Besides, a dynamic obstacle classification algorithm is designed to robustly identify moving objects. The dataset evaluation demonstrates a better perception performance compared to benchmark methods. The physical experiments on a quadcopter robot confirms the feasibility for real-world navigation.
Problem

Research questions and friction points this paper is trying to address.

Detects and tracks dynamic obstacles using LiDAR and camera data.
Enhances detection accuracy with robust sensor fusion strategy.
Ensures real-time performance for autonomous indoor robot navigation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LiDAR and camera for obstacle detection
Uses feature-based object association and Kalman filter
Implements dynamic obstacle classification algorithm
🔎 Similar Papers
No similar papers found.