No Parameters, No Problem: 3D Gaussian Splatting without Camera Intrinsics and Extrinsics

📅 2025-02-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
3D Gaussian Splatting (3DGS) achieves state-of-the-art performance in novel view synthesis but critically relies on pre-calibrated camera intrinsics (e.g., focal length) and extrinsics (poses). Existing pose-free optimization methods still require intrinsic priors. This paper introduces the first fully calibration-free, end-to-end 3DGS training framework: given only an uncalibrated image collection, it jointly optimizes scene geometry, appearance, and *all* camera parameters, both intrinsics and extrinsics. Key contributions include: (1) a theoretical derivation of differentiable gradients for intrinsic parameters, enabling full-parameter joint optimization; (2) a trajectory-guided hybrid Gaussian kernel strategy that adaptively scales Gaussian extents to improve multi-view consistency; and (3) a unified objective combining reprojection error minimization with adaptive Gaussian pruning. Evaluated on public and synthetic benchmarks, the method achieves SOTA reconstruction quality and novel view synthesis accuracy while drastically reducing dependence on prior camera calibration.
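The core of contribution (1) is that the reprojection error is differentiable with respect to the intrinsics themselves, so a quantity like focal length can be recovered by gradient descent alongside the scene. The toy sketch below illustrates this for a single unknown focal length under a pinhole model; the setup, names, and optimizer are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project(points, f, c):
    """Pinhole projection: u = f * X/Z + cx, v = f * Y/Z + cy."""
    z = points[:, 2:3]
    return f * points[:, :2] / z + c

rng = np.random.default_rng(0)
# Random 3D points in front of the camera (hypothetical scene)
points = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(100, 3))
c = np.array([320.0, 240.0])          # assume a known principal point
f_true = 500.0
obs = project(points, f_true, c)      # "observed" 2D track positions

# Start from a wrong focal length and descend on mean squared
# reprojection error E = mean((uv - obs)^2); dE/df = 2*mean(resid * XY/Z).
f = 300.0
lr = 10.0
for _ in range(200):
    resid = project(points, f, c) - obs
    grad = 2.0 * np.mean(resid * (points[:, :2] / points[:, 2:3]))
    f -= lr * grad

print(round(f, 1))  # converges toward f_true = 500.0
```

In the paper this gradient flows jointly with pose and Gaussian parameters during 3DGS training; the sketch isolates only the intrinsic term to show why the derivation makes intrinsics optimizable at all.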

๐Ÿ“ Abstract
While 3D Gaussian Splatting (3DGS) has made significant progress in scene reconstruction and novel view synthesis, it still heavily relies on accurately pre-computed camera intrinsics and extrinsics, such as focal length and camera poses. To mitigate this dependency, previous efforts have focused on optimizing 3DGS without the need for camera poses, yet camera intrinsics remain necessary. To further loosen this requirement, we propose a joint optimization method to train 3DGS from an image collection without requiring either camera intrinsics or extrinsics. To achieve this goal, we introduce several key improvements during the joint training of 3DGS. We theoretically derive the gradient of the camera intrinsics, allowing the camera intrinsics to be optimized simultaneously during training. Moreover, we integrate global track information and select the Gaussian kernels associated with each track; these kernels are trained and automatically rescaled to an infinitesimally small size so that they closely approximate surface points, focusing on enforcing multi-view consistency and minimizing reprojection errors, while the remaining kernels continue to serve their original roles. This hybrid training strategy nicely unifies camera parameter estimation and 3DGS training. Extensive evaluations demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on both public and synthetic datasets.
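The hybrid strategy in the abstract splits the kernels into two populations: track-associated Gaussians are shrunk to near-point size and supervised by reprojection error over every view observing the track, while the rest keep their photometric role. A minimal sketch of those two pieces, with all names and the epsilon scale being our own illustrative assumptions rather than the paper's code:

```python
import numpy as np

EPS_SCALE = 1e-4  # near-zero extent: tracked kernels behave like surface points

def reprojection_loss(point3d, track_obs, cameras):
    """Mean squared pixel error of one surface point over its observing views.

    cameras: list of (R, t, f, c) per view; track_obs: observed 2D positions.
    """
    err = 0.0
    for (R, t, f, c), uv_obs in zip(cameras, track_obs):
        p_cam = R @ point3d + t            # world -> camera coordinates
        uv = f * p_cam[:2] / p_cam[2] + c  # pinhole projection
        err += np.sum((uv - uv_obs) ** 2)
    return err / len(cameras)

def split_kernels(scales, tracked_mask):
    """Shrink track-associated Gaussians to point-like size; leave the rest."""
    scales = scales.copy()
    scales[tracked_mask] = EPS_SCALE
    return scales

# Tiny example: a point on the optical axis projects to the principal point,
# so a matching observation yields zero reprojection error.
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
f, c = 500.0, np.array([320.0, 240.0])
loss = reprojection_loss(np.zeros(3), [c], [(R, t, f, c)])
print(loss)  # 0.0
```

In the full method both terms are optimized jointly, so the shrunken kernels effectively act as a bundle-adjustment anchor that estimates the camera parameters while the untracked kernels carry appearance.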
Problem

Research questions and friction points this paper is trying to address.

Eliminates need for camera intrinsics
Removes dependency on camera extrinsics
Optimizes 3D Gaussian Splatting jointly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint optimization without camera parameters
Gradient derivation for intrinsic optimization
Hybrid training with global track integration
🔎 Similar Papers
No similar papers found.