🤖 AI Summary
This work addresses the limitation of existing 3D Gaussian splatting pruning methods, which rely on camera parameters or view-dependent information and thus struggle in camera-agnostic point cloud interchange scenarios. To overcome this, we propose a single-pass, post-training pruning approach that requires no camera information. Our method constructs neighborhood descriptors based solely on intrinsic Gaussian attributes and introduces a novel hybrid descriptor framework that jointly enforces structural and appearance consistency. Furthermore, we formulate pruning as a statistical evidence estimation problem, employing a Beta evidence model to quantify the reliability of each Gaussian via probabilistic confidence. Evaluated on ISO/IEC MPEG standard test sequences, our approach achieves significantly higher compression ratios while preserving high-fidelity reconstruction, outperforming current camera-dependent pruning strategies.
📝 Abstract
The pruning of 3D Gaussian splats is essential for reducing their complexity to enable efficient storage, transmission, and downstream processing. However, most existing pruning strategies depend on camera parameters, rendered images, or view-dependent measures. This dependency becomes a hindrance in emerging camera-agnostic exchange settings, where splats are shared directly as point-based representations (e.g., .ply). In this paper, we propose a camera-agnostic, one-shot, post-training pruning method for 3D Gaussian splats that relies solely on attribute-derived neighborhood descriptors. As our primary contribution, we introduce a hybrid descriptor framework that captures structural and appearance consistency directly from the splat representation. Building on these descriptors, we formulate pruning as a statistical evidence estimation problem and introduce a Beta evidence model that quantifies per-splat reliability through a probabilistic confidence score.
Experiments conducted on standardized test sequences defined by the ISO/IEC MPEG Common Test Conditions (CTC) demonstrate that our approach achieves substantial pruning while preserving reconstruction quality, establishing a practical and generalizable alternative to existing camera-dependent pruning strategies.
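To make the Beta evidence idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each splat compares its attribute descriptor with its k nearest neighbors in descriptor space, counts similar neighbors as "agree" evidence and dissimilar ones as "disagree" evidence, and keeps the splat only if the posterior mean of a Beta model exceeds a confidence threshold. All function names, the similarity threshold `sim_thresh`, and the prior `Beta(1, 1)` are illustrative assumptions.

```python
import numpy as np

def beta_confidence(agree, disagree, a0=1.0, b0=1.0):
    """Posterior mean of Beta(a0 + agree, b0 + disagree): the estimated
    probability that a splat is reliable, given neighborhood evidence.
    (Beta(1, 1) is a uniform prior; an illustrative choice.)"""
    return (a0 + agree) / (a0 + agree + b0 + disagree)

def prune_mask(descriptors, k=4, sim_thresh=1.0, tau=0.5):
    """Toy camera-agnostic pruning over per-splat attribute descriptors.
    Neighbors within sim_thresh in descriptor space count as 'agree'
    evidence; a splat is kept if its Beta confidence reaches tau."""
    n = len(descriptors)
    # Pairwise descriptor distances (O(n^2); fine for a sketch).
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-matches
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]          # k nearest neighbors
        agree = int(np.sum(d[i, nbrs] <= sim_thresh))
        disagree = k - agree
        keep[i] = beta_confidence(agree, disagree) >= tau
    return keep
```

For example, a tight cluster of splats with one descriptor-space outlier yields high confidence for the cluster members and low confidence for the outlier, which is then pruned without any camera or rendering information.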