🤖 AI Summary
Objaverse exhibits significant quality heterogeneity, which hinders the performance of 3D generative models trained on it. To address this, Objaverse++ proposes a multi-dimensional, generation-oriented quality annotation paradigm and constructs the first high-quality 3D subset of Objaverse: 10,000 models are manually annotated for aesthetic quality, texture fidelity, transparency, and other generation-relevant attributes; a lightweight neural network trained on these labels automatically assigns quality scores to approximately 500K Objaverse models; and a dedicated image-to-3D generation evaluation framework, complemented by user studies, provides rigorous validation. Experiments demonstrate that training exclusively on the top 2.5% of the data by quality achieves better efficiency and generation quality than training on the full, largely low-quality dataset: loss convergence accelerates by 42%, and FID improves by 28%. All ~500K model quality labels are publicly released to advance research in 3D vision and generative modeling.
📝 Abstract
This paper presents Objaverse++, a curated subset of Objaverse enhanced with detailed attribute annotations by human experts. Recent advances in 3D content generation have been driven by large-scale datasets such as Objaverse, which contains over 800,000 3D objects collected from the Internet. Although Objaverse is the largest available 3D asset collection, its utility is limited by the predominance of low-quality models. To address this limitation, we manually annotate 10,000 3D objects with detailed attributes, including aesthetic quality scores, texture color classifications, multi-object composition flags, and transparency characteristics. We then train a neural network to propagate these tags to the rest of the Objaverse dataset. Through experiments and a user study on generation results, we demonstrate that models pre-trained on our quality-focused subset outperform those trained on the larger Objaverse dataset in image-to-3D generation tasks. Furthermore, by comparing multiple training subsets filtered by our tags, we show that higher data quality leads to faster convergence of the training loss. These findings suggest that careful curation and rich annotation can compensate for raw dataset size, offering a potentially more efficient path to developing 3D generative models. We release our enhanced dataset of approximately 500,000 curated 3D models to facilitate further research on downstream tasks in 3D computer vision. In the near future, we aim to extend our annotations to cover the entire Objaverse dataset.
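The curation step described above, ranking models by a predicted quality score and keeping only the top fraction for training, can be sketched as follows. This is a minimal illustration: the label schema (`uid`, `quality_score`, `transparent` fields) and the `select_top_fraction` helper are hypothetical, not the released annotation format.

```python
# Hypothetical sketch of quality-based dataset filtering, e.g. keeping the
# top 2.5% of models by predicted quality score before pre-training.
# The field names below are assumptions, not the paper's released schema.

def select_top_fraction(labels, fraction=0.025):
    """Return the top `fraction` of models, ranked by quality score."""
    ranked = sorted(labels, key=lambda m: m["quality_score"], reverse=True)
    keep = max(1, int(len(ranked) * fraction))  # always keep at least one
    return ranked[:keep]

# Toy stand-in for the ~500K automatically scored models.
labels = [
    {"uid": "a1", "quality_score": 0.91, "transparent": False},
    {"uid": "b2", "quality_score": 0.12, "transparent": True},
    {"uid": "c3", "quality_score": 0.78, "transparent": False},
]

subset = select_top_fraction(labels, fraction=0.5)  # top half of this toy list
print([m["uid"] for m in subset])  # → ['a1']
```

In practice the attribute flags (e.g. transparency or multi-object composition) could be used as additional filters alongside the score threshold, depending on the downstream generation task.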