AI Summary
Existing point cloud compression methods face a trade-off between the training-data dependency of pretrained models and the high computational overhead of implicit neural representations (INRs). This work proposes a hybrid framework that, for the first time, integrates a Pretrained Prior Network (PPN) with a Distribution-Agnostic Refiner (DAR). The PPN accelerates INR convergence, while the DAR is decomposed into a base layer and an enhancement layer, with only the latter's parameters transmitted to reduce bitrate. Furthermore, supervised model compression is introduced to minimize the bit cost of the enhancement layer. While maintaining distribution-agnostic capability, the method achieves significant efficiency gains: it reduces bitrate by 20.43% relative to G-PCC on the 8iVFB dataset, outperforms UniPCGC by 57.85% on Cat1B, and yields an average 15.19% Bpp reduction compared to LINR-PCGC.
Abstract
Learning-based point cloud compression outperforms handcrafted codecs. However, pretraining-based methods, which rely on end-to-end training and are expected to generalize to all potential samples, suffer from training-data dependency. Implicit neural representation (INR) based methods are distribution-agnostic and more robust, but they require time-consuming online training and incur bitstream overhead from the overfitted model. To address these limitations, we propose HybridINR-PCGC, a novel hybrid framework that bridges pretrained models and INR. Our framework retains distribution-agnostic properties while leveraging a pretrained network to accelerate convergence and reduce model overhead. It consists of two parts: the Pretrained Prior Network (PPN) and the Distribution-Agnostic Refiner (DAR). The PPN, designed for fast inference and stable performance, generates a robust prior that accelerates the DAR's convergence. The DAR is decomposed into a base layer and an enhancement layer, and only the enhancement layer needs to be packed into the bitstream. Finally, we propose a supervised model compression module to further minimize the bitrate of the enhancement layer parameters. Experimental results show that HybridINR-PCGC achieves significantly improved compression rate and encoding efficiency. Specifically, our method reduces Bpp by approximately 20.43% compared to G-PCC on 8iVFB, and in the challenging out-of-distribution scenario Cat1B it reduces Bpp by approximately 57.85% compared to UniPCGC. Our method also exhibits a superior time-rate trade-off, achieving an average Bpp reduction of 15.19% relative to LINR-PCGC on 8iVFB.
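The core idea of the base/enhancement decomposition can be illustrated with a minimal numerical sketch: the base layer (derived from the pretrained prior) is already known to the decoder and costs no bits, so only a quantized residual (the enhancement layer) is transmitted. All names, shapes, and the coarse quantizer below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base layer: hypothetical DAR weights initialized from the Pretrained
# Prior Network (PPN). Shared with the decoder, so it costs no bits.
base_layer = rng.normal(size=(4, 8))

# Weights after per-sample online overfitting; they drift only slightly
# from the base layer because the prior already fits well (assumption).
overfitted = base_layer + rng.normal(scale=0.05, size=(4, 8))

# Enhancement layer: the per-sample residual that must be transmitted.
enhancement = overfitted - base_layer

# Uniform quantization of the residual stands in for the paper's
# supervised model compression of the enhancement-layer parameters.
step = 0.02
quantized = np.round(enhancement / step).astype(np.int32)

# Decoder side: reconstruct the refiner weights from the shared base
# layer plus the dequantized enhancement residual.
reconstructed = base_layer + quantized * step

# Uniform quantization bounds the reconstruction error by half a step.
max_err = np.abs(reconstructed - overfitted).max()
print(max_err <= step / 2)  # prints True
```

Only the small integer tensor `quantized` would enter the bitstream here, which is why shrinking the enhancement layer directly shrinks the model overhead that plain INR methods pay in full.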