RLGS: Reinforcement Learning-Based Adaptive Hyperparameter Tuning for Gaussian Splatting

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual hyperparameter tuning in 3D Gaussian Splatting (3DGS) leads to inconsistent reconstructions and performance bottlenecks. To address this, we propose the first reinforcement learning-based framework for adaptive hyperparameter optimization in 3DGS training: a model-agnostic, plug-and-play solution. Our method employs a lightweight policy network to dynamically adjust the learning rate and densification threshold in real time, with policy updates driven by closed-loop rendering feedback. Crucially, it requires no architectural modifications to the base 3DGS model and integrates seamlessly into existing pipelines. On the Tanks and Temples benchmark, our approach improves PSNR by 0.7 dB for Taming-3DGS. Extensive evaluation across multiple state-of-the-art 3DGS variants and diverse datasets demonstrates strong generalization and robustness, with consistent gains persisting even after baseline performance has saturated.
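The closed-loop idea in the summary (train for an interval, observe rendering quality, adjust the learning rate and densification threshold, reinforce what helped) can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual method: the class names, the epsilon-greedy action selection, and the bandit-style preference update are stand-ins for the lightweight policy network RLGS describes.

```python
import random

class HyperparamPolicy:
    """Toy policy: scale each hyperparameter up or down and reinforce
    whichever action improved PSNR. A simple bandit sketch, not the
    paper's actual policy architecture."""

    ACTIONS = (0.9, 1.0, 1.1)  # multiplicative adjustments

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # One preference score per action, per hyperparameter (names assumed).
        self.prefs = {"lr": [0.0] * 3, "densify_thresh": [0.0] * 3}

    def act(self, name):
        # Epsilon-greedy over preference scores.
        if self.rng.random() < 0.2:
            idx = self.rng.randrange(3)
        else:
            scores = self.prefs[name]
            idx = scores.index(max(scores))
        return idx, self.ACTIONS[idx]

    def update(self, name, idx, reward, step_size=0.5):
        # Move the chosen action's preference toward the observed reward.
        self.prefs[name][idx] += step_size * (reward - self.prefs[name][idx])


def tune(train_interval, lr=1e-3, densify_thresh=2e-4, rounds=30):
    """Closed loop: run a training interval, read back PSNR, let the
    policy nudge both hyperparameters, and reward PSNR improvement."""
    policy = HyperparamPolicy()
    prev_psnr = train_interval(lr, densify_thresh)
    for _ in range(rounds):
        lr_idx, lr_mult = policy.act("lr")
        dt_idx, dt_mult = policy.act("densify_thresh")
        lr *= lr_mult
        densify_thresh *= dt_mult
        psnr = train_interval(lr, densify_thresh)
        reward = psnr - prev_psnr  # rendering-feedback reward
        policy.update("lr", lr_idx, reward)
        policy.update("densify_thresh", dt_idx, reward)
        prev_psnr = psnr
    return lr, densify_thresh, prev_psnr
```

In a real pipeline, `train_interval` would wrap a block of 3DGS optimization steps and return a validation-view PSNR; here it is left as a callable so the loop stays model-agnostic, mirroring the plug-and-play framing in the summary.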

📝 Abstract
Hyperparameter tuning in 3D Gaussian Splatting (3DGS) is a labor-intensive and expert-driven process, often resulting in inconsistent reconstructions and suboptimal results. We propose RLGS, a plug-and-play reinforcement learning framework for adaptive hyperparameter tuning in 3DGS through lightweight policy modules, dynamically adjusting critical hyperparameters such as learning rates and densification thresholds. The framework is model-agnostic and seamlessly integrates into existing 3DGS pipelines without architectural modifications. We demonstrate its generalization ability across multiple state-of-the-art 3DGS variants, including Taming-3DGS and 3DGS-MCMC, and validate its robustness across diverse datasets. RLGS consistently enhances rendering quality. For example, it improves Taming-3DGS by 0.7 dB PSNR on the Tanks and Temples (TNT) dataset under a fixed Gaussian budget, and continues to yield gains even when baseline performance saturates. Our results suggest that RLGS provides an effective and general solution for automating hyperparameter tuning in 3DGS training, bridging a gap in applying reinforcement learning to 3DGS.
Problem

Research questions and friction points this paper is trying to address.

Automates hyperparameter tuning in 3D Gaussian Splatting
Dynamically adjusts learning rates and densification thresholds
Enhances rendering quality across diverse datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for adaptive hyperparameter tuning
Lightweight policy modules adjust critical parameters
Model-agnostic framework integrates into existing pipelines