🤖 AI Summary
Existing 3D generative models suffer from inconsistent object orientations across categories because their training data is not orientation-aligned, severely hindering downstream applications. This work introduces "orientation-aligned 3D generation" as a new task and establishes Objaverse-OA, a large-scale, fine-grained orientation-annotated benchmark (14,832 models across 1,008 categories). Methodologically, we fine-tune two representative 3D generative models, one based on multi-view diffusion and one on a 3D variational autoencoder framework, to produce orientation-aligned outputs, enabling zero-shot orientation estimation via analysis-by-synthesis as well as arrow-driven interactive rotation control. Our models generate orientation-consistent 3D objects end-to-end across categories without post-hoc alignment. Experiments demonstrate significant improvements over conventional alignment-based post-processing methods, strong generalization to unseen categories, and precise orientation controllability, establishing a reliable geometric prior for single-image 3D generation.
📝 Abstract
Humans intuitively perceive object shape and orientation from a single image, guided by strong priors about canonical poses. However, existing 3D generative models often produce misaligned results due to inconsistent training data, limiting their usability in downstream tasks. To address this gap, we introduce the task of orientation-aligned 3D object generation: producing 3D objects from single images with consistent orientations across categories. To facilitate this, we construct Objaverse-OA, a dataset of 14,832 orientation-aligned 3D models spanning 1,008 categories. Leveraging Objaverse-OA, we fine-tune two representative 3D generative models, based on multi-view diffusion and 3D variational autoencoder frameworks respectively, to produce aligned objects that generalize well to unseen instances across diverse categories. Experimental results demonstrate the superiority of our method over post-hoc alignment approaches. Furthermore, we showcase downstream applications enabled by our aligned object generation, including zero-shot object orientation estimation via analysis-by-synthesis and efficient arrow-based object rotation manipulation.
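The analysis-by-synthesis idea mentioned above can be illustrated with a minimal sketch: because the generated object is in a canonical orientation, one can search over candidate rotations of it, render each candidate, and keep the rotation whose rendering best matches the observed image. The snippet below is a toy illustration of this loop, not the paper's implementation — it uses a point-cloud stand-in for the generated 3D model, a trivial silhouette "renderer", and a brute-force search over yaw only; all function names (`rotate_yaw`, `render_silhouette`, `estimate_yaw`) are hypothetical.

```python
import numpy as np

def rotate_yaw(points, theta):
    """Rotate an (N, 3) point cloud about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def render_silhouette(points, res=32):
    """Toy orthographic 'renderer': splat points onto an x-y occupancy grid."""
    img = np.zeros((res, res))
    xy = ((points[:, :2] + 1.0) * 0.5 * (res - 1)).astype(int)
    xy = np.clip(xy, 0, res - 1)
    img[xy[:, 1], xy[:, 0]] = 1.0
    return img

def estimate_yaw(canonical_points, observed_image, n_candidates=72):
    """Analysis-by-synthesis: render the canonically aligned model at candidate
    yaw angles and return the angle whose rendering best matches the observation."""
    best_theta, best_err = 0.0, np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_candidates, endpoint=False):
        rendered = render_silhouette(rotate_yaw(canonical_points, theta))
        err = np.abs(rendered - observed_image).sum()
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta

# Toy example: an asymmetric point cloud observed at a known yaw.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))
pts[:, 0] *= 0.3  # flatten along x so the yaw is identifiable from the silhouette
true_yaw = np.deg2rad(40)
obs = render_silhouette(rotate_yaw(pts, true_yaw))
est = estimate_yaw(pts, obs)
```

In the paper's setting, the canonical model would come from the orientation-aligned generator and the renderer would be a proper differentiable or rasterized renderer compared against the input photograph; the brute-force yaw sweep here simply makes the search loop explicit.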