Been There, Scanned That: Nostalgia-Driven LiDAR Compression for Self-Driving Cars

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high transmission and storage costs associated with massive LiDAR point cloud data in autonomous vehicles, this paper proposes a nostalgia-inspired compression method leveraging long-term temporal redundancy. Unlike conventional inter-frame compression, our approach uniquely exploits historical point clouds collected over repeated traversals—across days and months—as reference data. By performing spatiotemporal alignment and 3D differential encoding, the current frame is represented as an incremental update relative to the historical reference; an optimized coding strategy further balances compression ratio and reconstruction fidelity. Experiments on two months of real-world LiDAR sequences demonstrate a 210× compression ratio with a mean reconstruction error below 15 cm—significantly outperforming state-of-the-art methods. This work establishes a novel paradigm for efficient cloud-cooperative processing of onboard point clouds.

📝 Abstract
An autonomous vehicle can generate several terabytes of sensor data per day. A significant portion of this data consists of 3D point clouds produced by depth sensors such as LiDARs. This data must be transferred to cloud storage, where it is utilized for training machine learning models or conducting analyses, such as forensic investigations in the event of an accident. To reduce network and storage costs, this paper introduces DejaView. Whereas prior work exploits inter-frame redundancies to compress data, DejaView searches for and uses redundancies on larger temporal scales (days and months) for more effective compression. We designed DejaView with the insight that the operating area of autonomous vehicles is limited and that vehicles mostly traverse the same routes daily. Consequently, the 3D data they collect daily is likely similar to the data they have captured in the past. To capture this, the core of DejaView is a diff operation that compactly represents point clouds as deltas with respect to 3D data from the past. Using two months of LiDAR data, an end-to-end implementation of DejaView can compress point clouds by a factor of 210 at a reconstruction error of only 15 cm.
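The diff operation described above can be illustrated with a minimal sketch. The function names, the brute-force nearest-neighbor matching, and the 15 cm threshold below are illustrative assumptions, not the paper's actual algorithm (which performs spatiotemporal alignment and optimized coding); the sketch only shows the core idea of representing a frame as small residuals against a historical reference cloud plus raw points where no match exists.

```python
import numpy as np

def diff_encode(current, reference, threshold=0.15):
    """Encode `current` (N x 3) as a delta against `reference` (M x 3).

    Points within `threshold` meters of a reference point are stored as
    (reference index, small residual); the rest are stored raw. A real
    system would additionally quantize and entropy-code the residuals.
    """
    # Brute-force nearest neighbor: (N, M) pairwise distance matrix.
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    residual = current - reference[idx]
    matched = d[np.arange(len(current)), idx] <= threshold
    return {
        "idx": idx[matched],        # which reference point each match uses
        "res": residual[matched],   # small 3D offsets (cheap to code)
        "raw": current[~matched],   # novel geometry, stored verbatim
        "mask": matched,
    }

def diff_decode(delta, reference):
    """Reconstruct the current frame from a delta and the reference cloud."""
    out = np.empty((len(delta["mask"]), 3))
    out[delta["mask"]] = reference[delta["idx"]] + delta["res"]
    out[~delta["mask"]] = delta["raw"]
    return out
```

The compression gain comes from the matched residuals being small and tightly distributed, so they compress far better than raw coordinates; only genuinely new geometry (e.g., a parked car that was not there yesterday) pays full cost.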
Problem

Research questions and friction points this paper is trying to address.

Compressing LiDAR data for autonomous vehicles
Reducing network and storage costs for 3D point clouds
Leveraging temporal redundancies over days and months
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses LiDAR data using temporal redundancies
Represents point clouds as deltas from past data
Achieves 210× compression at a reconstruction error of only 15 cm