🤖 AI Summary
This work addresses the low efficiency and poor view consistency of text-to-multi-view image generation. We propose an efficient synthesis method based on a spatio-angular latent space that jointly models appearance and viewpoint variations, enabling single-pass inference that generates 32 geometrically consistent views in roughly five seconds and significantly reduces latency. Our approach employs a two-stage training strategy: first, a variational autoencoder learns disentangled geometry-appearance representations from multi-view data; second, a diffusion model is trained in this latent space for high-fidelity, text-conditioned generation. Experiments demonstrate that our method outperforms existing approaches in view consistency and inference speed while delivering competitive image quality and text-image alignment. By introducing a compact, semantically rich intermediate representation, our framework establishes an efficient and reliable paradigm for text-driven 3D asset generation.
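To make the two-stage design concrete, here is a minimal PyTorch sketch of the first stage under stated assumptions: a toy VAE that folds all 32 views and their camera parameters into one spatio-angular latent and reconstructs every view from it in a single decode. The module name (`SpatioAngularVAE`), all dimensions, and the flattened-MLP layout are illustrative guesses, not RapidMV's actual architecture; the second stage would then train a text-conditioned diffusion model on these latents.

```python
import torch
import torch.nn as nn

class SpatioAngularVAE(nn.Module):
    """Toy VAE folding V views plus their camera angles into ONE latent."""
    def __init__(self, num_views=32, img_dim=3 * 16 * 16, cam_dim=4, latent_dim=256):
        super().__init__()
        # Appearance and viewpoint are concatenated, so the latent jointly
        # encodes spatial content and angular deviations (the key idea).
        in_dim = num_views * (img_dim + cam_dim)
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.SiLU())
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        # The decoder maps the single latent back to ALL views in one pass.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.SiLU(),
            nn.Linear(1024, num_views * img_dim),
        )

    def forward(self, views, cams):
        # views: (B, V, C*H*W) flattened images; cams: (B, V, cam_dim) poses.
        x = torch.cat([views, cams], dim=-1).flatten(1)
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(z).view(views.shape)
        return recon, mu, logvar

# Stage-1-style reconstruction objective (ELBO: reconstruction + KL).
vae = SpatioAngularVAE()
views = torch.randn(2, 32, 3 * 16 * 16)
cams = torch.randn(2, 32, 4)
recon, mu, logvar = vae(views, cams)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss = nn.functional.mse_loss(recon, views) + 1e-4 * kl
loss.backward()
print(loss.item())
```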
📝 Abstract
Generating multi-view images from a text prompt is an essential bridge to generating synthetic 3D assets. In this work, we introduce RapidMV, a novel text-to-multi-view generative model that produces 32 multi-view synthetic images in only about 5 seconds. At its core is a novel spatio-angular latent space that encodes both spatial appearance and angular viewpoint deviations into a single latent, improving efficiency and multi-view consistency. We train RapidMV effectively by strategically decomposing the training process into multiple steps. We demonstrate that RapidMV outperforms existing methods in consistency and latency, with competitive quality and text-image alignment.
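For intuition on why inference is a single pass, the sketch below denoises one spatio-angular latent with a toy Euler-style sampler and then decodes all 32 views from it at once. The text embedding, `denoiser`, and `decoder` here are stand-ins assumed purely for illustration; the abstract does not describe RapidMV's actual sampler or networks.

```python
import torch
import torch.nn as nn

# Hypothetical pieces: a prompt-embedding stand-in, a latent denoiser, and a
# decoder like the stage-1 sketch above. None mirror RapidMV's real modules.
latent_dim, text_dim, num_views, img_dim = 256, 128, 32, 3 * 16 * 16

text_emb = torch.randn(1, text_dim)  # stand-in for an encoded text prompt
denoiser = nn.Sequential(
    nn.Linear(latent_dim + text_dim + 1, 512),
    nn.SiLU(),
    nn.Linear(512, latent_dim),
)
decoder = nn.Linear(latent_dim, num_views * img_dim)  # one latent -> all views

z = torch.randn(1, latent_dim)  # start from pure noise in latent space
steps = 20
with torch.no_grad():
    for i in reversed(range(steps)):      # toy deterministic sampling loop
        t = torch.full((1, 1), i / steps)  # scalar timestep conditioning
        eps = denoiser(torch.cat([z, text_emb, t], dim=-1))
        z = z - eps / steps               # crude Euler update
    views = decoder(z).view(1, num_views, 3, 16, 16)  # single decode pass

print(views.shape)  # torch.Size([1, 32, 3, 16, 16]): 32 views at once
```

The point of the sketch is that the loop runs over denoising steps, not over views, so the per-view cost is paid only in the final decode; that is what makes single-pass generation of 32 consistent views plausible at low latency.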