LiDAR-EDIT: LiDAR Data Generation by Editing the Object Layouts in Real-World Scenes

📅 2024-11-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the lack of scene controllability and background realism in synthetic LiDAR data for autonomous driving, this paper proposes the first layout-editing-based LiDAR generation paradigm. It enables explicit user control over object count, category, and 6-DoF pose, supporting object-level insertion, deletion, and pose adjustment directly on real LiDAR scans. The method integrates spherical voxelization modeling, generative background completion (to handle occlusion and insertion), LiDAR projection geometry constraints, and an end-to-end editing pipeline—thereby fully preserving the original background’s geometric structure and semantic content. The generated data exhibit high fidelity and precise object-level semantic annotations, enabling realistic counterfactual scenes substantially deviating from the original data distribution. Experiments demonstrate significant improvements in detection and segmentation model generalization under long-tail and rare-scene conditions, achieving a labeling accuracy of 92.3% and state-of-the-art background structural fidelity.

Technology Category

Application Category

📝 Abstract
We present LiDAR-EDIT, a novel paradigm for generating synthetic LiDAR data for autonomous driving. Our framework edits real-world LiDAR scans by introducing new object layouts while preserving the realism of the background environment. Compared to end-to-end frameworks that generate LiDAR point clouds from scratch, LiDAR-EDIT offers users full control over the object layout, including the number, type, and pose of objects, while keeping most of the original real-world background. Our method also provides object labels for the generated data. Compared to novel view synthesis techniques, our framework allows for the creation of counterfactual scenarios with object layouts significantly different from the original real-world scene. LiDAR-EDIT uses spherical voxelization to enforce correct LiDAR projective geometry in the generated point clouds by construction. During object removal and insertion, generative models are employed to fill the unseen background and object parts that were occluded in the original real LiDAR scans. Experimental results demonstrate that our framework produces realistic LiDAR scans with practical value for downstream tasks.
Problem

Research questions and friction points this paper is trying to address.

Generates synthetic LiDAR data by editing object layouts
Preserves background realism while controlling object attributes
Creates counterfactual scenarios with diverse object arrangements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Edits real-world LiDAR scans with new object layouts
Uses spherical voxelization for correct LiDAR geometry
Employs generative models for occluded background and objects
🔎 Similar Papers
No similar papers found.