🤖 AI Summary
This work addresses the challenge of classifying 3D-printed objects in industrial additive manufacturing, where reliance on manual inspection and the frequent introduction of new object types make repeated model retraining impractical, hindering post-processing automation. To tackle this, the authors introduce ThingiPrint, a novel dataset pairing CAD models with images of their real-world printed counterparts, and propose a prototype-based classification method that eliminates the need for retraining. Building on a pre-trained vision model, the approach combines contrastive fine-tuning with rotation-invariant representation learning to recognize previously unseen printed objects with high accuracy. Experiments on the ThingiPrint benchmark show that the proposed method significantly outperforms standard pre-trained models, improving both generalization and deployment feasibility and thereby advancing the automation of post-processing workflows.
📝 Abstract
Reliable classification of 3D-printed objects is essential for automating post-production workflows in industrial additive manufacturing. Despite extensive automation in other stages of the printing pipeline, this task still relies heavily on manual inspection, as the set of objects to be classified can change daily, making frequent model retraining impractical. Automating the identification step is therefore critical for improving operational efficiency. A vision model that could classify any set of objects by utilizing their corresponding CAD models, without retraining, would be highly beneficial in this setting. To enable systematic evaluation of vision models on this task, we introduce ThingiPrint, a new publicly available dataset that pairs CAD models with real photographs of their 3D-printed counterparts. Using ThingiPrint, we benchmark a range of existing vision models on the task of 3D-printed object classification. We additionally show that contrastive fine-tuning with a rotation-invariant objective enables effective prototype-based classification of previously unseen 3D-printed objects. Because classification relies solely on the available CAD models, the method avoids retraining when new objects are introduced. Experiments show that this approach outperforms standard pre-trained baselines, suggesting improved generalization and practical relevance for real-world use.
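The prototype-based classification described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): we assume embeddings for rendered CAD views and for a query photograph are produced by the fine-tuned vision model, and here they are represented simply as NumPy vectors. Each object's prototype is the normalized mean of its view embeddings, and a query is assigned to the prototype with the highest cosine similarity, so new objects only require embedding their CAD renders, with no retraining.

```python
import numpy as np

def build_prototypes(embeddings_per_object):
    """Average several rendered-view embeddings into one prototype per object.

    embeddings_per_object: dict mapping object name -> list of 1-D embedding arrays
    (assumed to come from the fine-tuned vision model).
    """
    prototypes = {}
    for name, views in embeddings_per_object.items():
        proto = np.mean(np.stack(views), axis=0)
        # L2-normalize so cosine similarity reduces to a dot product.
        prototypes[name] = proto / np.linalg.norm(proto)
    return prototypes

def classify(query_embedding, prototypes):
    """Return the name of the prototype most similar to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    return max(prototypes, key=lambda name: float(q @ prototypes[name]))
```

Introducing a new object type amounts to one extra `build_prototypes` entry computed from its CAD renders, which is what makes the retraining-free deployment described in the abstract possible.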