🤖 AI Summary
Single-view 3D generation suffers from texture distortion and geometric misalignment due to visual inconsistency across views in diffusion-based outputs. To address this, we propose the first single-view 3D generation framework incorporating contrastive learning, featuring a novel Quantity-Aware Triplet Loss to enhance cross-view feature discriminability, coupled with a super-resolution module to improve fine-grained detail modeling. Our method integrates differentiable Gaussian splatting rendering, Score Distillation Sampling (SDS), perceptual loss, and contrastive learning within a unified end-to-end training pipeline. This synergy significantly improves both texture fidelity and geometric consistency across viewpoints. Extensive experiments demonstrate state-of-the-art performance on multiple benchmarks, outperforming existing methods in both qualitative and quantitative evaluations.
📝 Abstract
Creating 3D content from single-view images is a challenging problem that has attracted considerable attention in recent years. Current approaches typically utilize score distillation sampling (SDS) from pre-trained 2D diffusion models to generate multi-view 3D representations. Although some methods have made notable progress by balancing generation speed and model quality, their performance is often limited by the visual inconsistencies of the diffusion model outputs. In this work, we propose ContrastiveGaussian, which integrates contrastive learning into the generative process. Using a perceptual loss, we differentiate between positive and negative samples, turning the visual inconsistencies into a training signal that improves 3D generation quality. To further sharpen sample differentiation, we incorporate a super-resolution model and introduce a Quantity-Aware Triplet Loss that accounts for varying sample distributions during training. Our experiments demonstrate that our approach achieves superior texture fidelity and improved geometric consistency.
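The abstract does not give the exact formulation of the Quantity-Aware Triplet Loss, but the underlying idea (a triplet loss averaged over a variable number of positive and negative samples per batch) can be sketched as follows. This is a hypothetical illustration in NumPy; the function names, the squared-Euclidean distance, and the uniform averaging over pairs are assumptions, not the paper's actual implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard triplet loss: pull the anchor toward the positive sample
    # and push it away from the negative by at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

def quantity_aware_triplet_loss(anchor, positives, negatives, margin=0.2):
    # Hypothetical sketch: average the triplet loss over every
    # positive/negative pair, so the loss scale stays comparable
    # even when the number of samples per batch varies.
    losses = [triplet_loss(anchor, p, n, margin)
              for p in positives for n in negatives]
    return float(np.mean(losses))
```

In the paper's setting, the anchor and positives would be features of views judged consistent (e.g., by the perceptual loss), while negatives come from inconsistent diffusion outputs; normalizing by the pair count keeps the contrastive term stable as those sets change size during training.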