LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression

πŸ“… 2025-07-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing AI-based point cloud geometry compression methods suffer from poor generalization, being tightly coupled to specific training distributions; implicit neural representation (INR)-based approaches improve generalization, but they only support lossy geometry compression and face limitations in coding overhead and decoder footprint. This work proposes the first INR-based *lossless* point cloud geometry compression framework: a lightweight multi-scale SparseConv network that jointly performs scale-aware context extraction, child-node prediction, and model compression. Grouped hierarchical point cloud encoding and an efficient network initialization strategy further accelerate encoding and shrink the decoder. Evaluated on the MVUB dataset, the method achieves bitrate reductions of 21.21% and 21.95% over G-PCC TMC13v23 and SparsePCGC, respectively, while cutting encoding time by ~60%. The framework is distribution-agnostic, computationally efficient, and yields compact decoders.

πŸ“ Abstract
Existing AI-based point cloud compression methods struggle with dependence on specific training data distributions, which limits their real-world deployment. Implicit Neural Representation (INR) methods address this problem by encoding overfitted network parameters into the bitstream, yielding more distribution-agnostic results. However, due to limitations in encoding time and decoder size, current INR-based methods only consider lossy geometry compression. In this paper, we propose the first INR-based lossless point cloud geometry compression method, called Lossless Implicit Neural Representations for Point Cloud Geometry Compression (LINR-PCGC). To accelerate encoding, we design a group-of-point-clouds level coding framework with an effective network initialization strategy, which reduces encoding time by around 60%. A lightweight coding network based on multiscale SparseConv, consisting of scale context extraction, child node prediction, and model compression modules, is proposed to achieve fast inference and a compact decoder. Experimental results show that our method consistently outperforms traditional and AI-based methods: for example, at convergence time on the MVUB dataset, it reduces the bitstream by approximately 21.21% compared to G-PCC TMC13v23 and by 21.95% compared to SparsePCGC. Our project page is at https://huangwenjie2023.github.io/LINR-PCGC/.
Problem

Research questions and friction points this paper is trying to address.

Overcoming training data distribution dependence in point cloud compression
Achieving lossless compression with implicit neural representations
Reducing encoding time and decoder size for practical use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group-level coding framework for faster encoding
Multiscale SparseConv network for lightweight decoding
Effective initialization strategy reducing encoding time
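The multiscale structure that lossless schemes like this one build on can be illustrated with a minimal, library-free sketch (an assumed simplification for illustration only; the paper's actual model predicts child occupancy with a learned SparseConv network, which is not shown here). Each coarser scale halves the voxel coordinates, and lossless coding reduces to signalling, per parent voxel, which of its 8 children are occupied:

```python
# Simplified illustration of multiscale (octree-style) occupancy coding.
# Assumption: integer voxel coordinates; no learned prediction involved.

def downscale(points):
    """Parent voxels at the next coarser scale (coordinates halved)."""
    return sorted({(x // 2, y // 2, z // 2) for (x, y, z) in points})

def child_occupancy(points, parent):
    """8-bit occupancy code: bit i is set iff child i of `parent` is occupied."""
    px, py, pz = parent
    occupied = set(points)
    code = 0
    for i in range(8):
        dx, dy, dz = (i >> 2) & 1, (i >> 1) & 1, i & 1
        if (2 * px + dx, 2 * py + dy, 2 * pz + dz) in occupied:
            code |= 1 << i
    return code

points = [(0, 0, 0), (1, 0, 0), (2, 2, 2)]
parents = downscale(points)                              # [(0, 0, 0), (1, 1, 1)]
codes = [child_occupancy(points, p) for p in parents]    # [17, 1]
```

A learned coder replaces the raw one-byte-per-parent signal with a context model: the network predicts a probability for each child-occupancy pattern from the coarser scale, and an entropy coder spends fewer bits on likely patterns, which is where the bitrate savings come from.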
πŸ‘₯ Authors
Wenjie Huang
Shanghai Jiao Tong University
Point cloud compression Β· Video compression Β· Image compression
Qi Yang
University of Missouri-Kansas City
Shuting Xia
Shanghai Jiao Tong University
He Huang
Shanghai Jiao Tong University
Zhu Li
University of Missouri-Kansas City
Yiling Xu
Shanghai Jiao Tong University