🤖 AI Summary
This work targets the low coding efficiency and high processing latency of lossless point cloud attribute compression. We propose a multi-scale hierarchical attention-based context model. Methodologically, we introduce a Level-of-Detail (LoD)-driven hierarchical self-attention mechanism that enables cross-scale, density-adaptive context modeling; integrate residual learning with joint coordinate–attribute normalization for scale-invariant compression; and adopt a spatial tiling-based parallel encoding architecture to reduce latency. Experiments show better compression ratios than the G-PCC standard for both color and reflectance attributes, together with substantially faster encoding and decoding, bringing lossless storage and communication of high-fidelity point clouds closer to real-time use.
📝 Abstract
In this paper, we propose a deep hierarchical attention context model for lossless attribute compression of point clouds, leveraging a multi-resolution spatial structure and residual learning. A simple and effective Level of Detail (LoD) structure is introduced to yield a coarse-to-fine representation. To enhance efficiency, points within the same refinement level are encoded in parallel and share a common group of context points. By hierarchically aggregating information from neighboring points, our attention model learns contextual dependencies across varying scales and densities, enabling comprehensive feature extraction. We also normalize position coordinates and attributes to achieve scale-invariant compression. Additionally, we segment the point cloud into multiple slices to enable parallel processing, further reducing runtime. Experimental results demonstrate that the proposed method achieves better coding performance than the latest G-PCC for both color and reflectance attributes while requiring lower encoding and decoding runtimes.
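To make the coarse-to-fine LoD idea and the coordinate/attribute normalization concrete, below is a minimal NumPy sketch. The greedy distance-threshold subsampling, the threshold-halving schedule, and the names `build_lod` and `normalize` are illustrative assumptions in the spirit of G-PCC-style LoD generation, not the paper's actual implementation:

```python
import numpy as np

def build_lod(points, num_levels=3, seed=0):
    """Partition point indices into refinement levels (coarse to fine).

    At each level, greedily keep points that are at least distance `d`
    from every point already kept; the rest fall through to the next
    (finer) level, where the threshold is halved. This is a simplified
    stand-in for an LoD construction (assumption, not the paper's exact rule).
    """
    rng = np.random.default_rng(seed)
    remaining = list(rng.permutation(len(points)))
    # Initial threshold: a fraction of the bounding-box diagonal (assumption).
    d = np.linalg.norm(points.max(0) - points.min(0)) / 4.0
    levels = []
    for _ in range(num_levels - 1):
        kept, passed = [], []
        for idx in remaining:
            if all(np.linalg.norm(points[idx] - points[k]) >= d for k in kept):
                kept.append(idx)   # coarse point: becomes context for finer levels
            else:
                passed.append(idx)  # deferred to a finer refinement level
        levels.append(kept)
        remaining = passed
        d /= 2.0
    levels.append(remaining)  # finest level absorbs all leftover points
    return levels

def normalize(coords, attrs):
    """Min-max normalize coordinates and attributes to [0, 1] so the
    context model sees scale-invariant inputs (one plausible reading of
    the paper's joint normalization)."""
    c = (coords - coords.min(0)) / np.ptp(coords, axis=0)
    a = (attrs - attrs.min(0)) / np.ptp(attrs, axis=0)
    return c, a
```

Because every point in a refinement level is conditioned only on the already-decoded coarser levels (the shared context group), all points within one level can be entropy-coded in parallel, which is where the runtime advantage over strictly sequential prediction comes from.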