🤖 AI Summary
Existing 3D Gaussian Splatting GANs rely on camera pose conditioning to stabilize training, yet this induces identity drift and cross-view distortions; removing the conditioning typically causes training to collapse. This paper proposes the first pose-conditioning-free, stably trainable 3D Gaussian GAN framework. It combines a lightweight generator architecture, a multi-view consistency regularization, and a reformulated conditional loss. Leveraging a newly curated high-fidelity head dataset derived from FFHQ, the method achieves high-resolution (up to 2048²), geometry- and identity-consistent novel-view synthesis from arbitrary poses. Quantitatively, the approach significantly improves 3D consistency while preserving rendering fidelity, achieving competitive FID scores and substantially outperforming pose-conditioned baselines.
📝 Abstract
Recently, 3D GANs based on 3D Gaussian splatting have been proposed for high-quality synthesis of human heads. However, existing methods stabilize training and enhance rendering quality from steep viewpoints by conditioning the random latent vector on the current camera position. This compromises 3D consistency, as we observe significant identity changes when re-synthesizing the 3D head with each camera shift. Conversely, fixing the camera to a single viewpoint yields high-quality renderings for that perspective but results in poor performance for novel views. Removing view-conditioning typically destabilizes GAN training, often causing the training to collapse. In response to these challenges, we introduce CGS-GAN, a novel 3D Gaussian Splatting GAN framework that enables stable training and high-quality 3D-consistent synthesis of human heads without relying on view-conditioning. To ensure training stability, we introduce a multi-view regularization technique that enhances generator convergence with minimal computational overhead. Additionally, we adapt the conditional loss used in existing 3D Gaussian splatting GANs and propose a generator architecture designed to not only stabilize training but also facilitate efficient rendering and straightforward scaling, enabling output resolutions up to $2048^2$. To evaluate the capabilities of CGS-GAN, we curate a new dataset derived from FFHQ. This dataset enables very high resolutions, focuses on larger portions of the human head, reduces view-dependent artifacts for improved 3D consistency, and excludes images where subjects are obscured by hands or other objects. As a result, our approach achieves very high rendering quality, supported by competitive FID scores, while ensuring consistent 3D scene generation. Check out our project page: https://fraunhoferhhi.github.io/cgs-gan/