Snap-Snap: Taking Two Images to Reconstruct 3D Human Gaussians in Milliseconds

📅 2025-08-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of reconstructing high-fidelity 3D human avatars from only two input images—front and back views. We propose a dual-view-driven Gaussian reconstruction method that employs a lightweight geometric prediction network jointly optimized for point cloud completion and color enhancement, ensuring 3D consistency and fine surface detail under sparse input conditions. The model directly outputs a renderable 3D Gaussian splatting representation—bypassing explicit mesh generation. Trained on large-scale human datasets, it robustly handles low-resolution, low-quality mobile-captured images. On a single RTX 4090 GPU, full-body reconstruction from 1024×1024 inputs completes in 190 ms, achieving state-of-the-art performance on THuman2.0 and cross-domain benchmarks. Our approach significantly lowers the barrier to personal 3D digital human creation, enabling the first end-to-end, efficient reconstruction pipeline from just two input images to a fully renderable 3D Gaussian model.

📝 Abstract
Reconstructing 3D human bodies from sparse views has been an appealing topic, and is crucial to broadening related applications. In this paper, we tackle the challenging but valuable task of reconstructing the human body from only two images, i.e., the front and back views, which can largely lower the barrier for users to create their own 3D digital humans. The main challenges lie in building 3D consistency and recovering missing information from the highly sparse input. We redesign a geometry reconstruction model based on foundation reconstruction models to predict consistent point clouds even when the input images barely overlap, enabled by training on extensive human data. Furthermore, an enhancement algorithm supplements the missing color information, yielding complete, colored human point clouds that are directly transformed into 3D Gaussians for better rendering quality. Experiments show that our method can reconstruct the entire human in 190 ms on a single NVIDIA RTX 4090 from two images at a resolution of 1024×1024, demonstrating state-of-the-art performance on the THuman2.0 and cross-domain datasets. Additionally, our method can reconstruct humans even from images captured by low-cost mobile devices, reducing the requirements for data collection. Demos and code are available at https://hustvl.github.io/Snap-Snap/.
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D human bodies from only two images
Building 3D consistency with highly sparse input views
Recovering missing geometric and color information efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses only a front and a back image for full-body reconstruction
Redesigned geometry model predicts consistent point clouds
Transforms colored point clouds into 3D Gaussians
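
The last step above, converting a colored point cloud into renderable 3D Gaussians, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name, the isotropic scale initialization from k-nearest-neighbor distances, and the constant opacity are assumptions, though 3D Gaussian splatting pipelines commonly initialize point-based Gaussians in a similar way.

```python
import numpy as np

def gaussians_from_point_cloud(points, colors, k=3):
    """Initialize per-point 3D Gaussian attributes from a colored point cloud.

    points: (N, 3) array of xyz positions
    colors: (N, 3) array of RGB values in [0, 1]
    Returns a dict of Gaussian attributes: means, colors, scales,
    rotations (identity quaternions), and opacities.
    """
    n = points.shape[0]
    # Scale each Gaussian by the mean distance to its k nearest
    # neighbors so the splats roughly cover the surface (O(N^2),
    # fine for a demo; real pipelines use a spatial index).
    diffs = points[:, None, :] - points[None, :, :]          # (N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1)                   # (N, N)
    dists.sort(axis=1)                                       # row-wise ascending
    knn_mean = dists[:, 1:k + 1].mean(axis=1)                # skip self (d = 0)
    return {
        "means": points,
        "colors": colors,
        "scales": np.repeat(knn_mean[:, None], 3, axis=1),   # isotropic init
        "rotations": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),  # identity quats
        "opacities": np.full((n, 1), 0.9),                   # assumed constant
    }
```

From here, a splatting renderer would rasterize these attributes directly, which is why the method can skip explicit mesh generation.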