🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) employs uniform resource allocation, leading to suboptimal fidelity in regions of interest (ROIs) while causing model redundancy and increased rendering overhead. To address this, we propose an object-aware local refinement framework: it dynamically focuses training on ROIs via object-guided camera selection and targeted optimization, and it introduces high-fidelity local reconstruction coupled with multi-scale global fusion to jointly enhance geometric and appearance detail while compressing the model. Our method preserves real-time rendering capability while significantly improving local reconstruction quality. Experiments on single-object ROI scenes demonstrate a 2.96 dB PSNR gain, a 17% reduction in model size, and accelerated training, achieving superior overall performance compared to state-of-the-art 3DGS approaches.
📝 Abstract
We tackle the challenge of efficiently reconstructing 3D scenes with high detail on objects of interest. Existing 3D Gaussian Splatting (3DGS) methods allocate resources uniformly across the scene, limiting fine detail in Regions Of Interest (ROIs) and leading to inflated model size. We propose ROI-GS, an object-aware framework that enhances local detail through object-guided camera selection, targeted object training, and seamless integration of high-fidelity object-of-interest reconstructions into the global scene. Our method prioritizes higher-resolution detail on chosen objects while maintaining real-time performance. Experiments show that ROI-GS significantly improves local quality (up to 2.96 dB PSNR) while reducing overall model size by $\approx 17\%$ relative to the baseline and achieving faster training for a scene with a single object of interest, outperforming existing methods.