Classifying Novel 3D-Printed Objects without Retraining: Towards Post-Production Automation in Additive Manufacturing

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of classifying 3D-printed objects in industrial additive manufacturing, where reliance on manual inspection and frequent introduction of new object types render repeated model retraining impractical, thereby hindering post-processing automation. To tackle this, the authors introduce ThingiPrint, a novel dataset combining CAD models with real-world printed images, and propose a prototype-based classification method that eliminates the need for retraining. Leveraging a pre-trained vision model, the approach integrates contrastive fine-tuning with rotation-invariant representation learning to achieve high-accuracy recognition of previously unseen printed objects. Experiments on the ThingiPrint benchmark demonstrate that the proposed method significantly outperforms standard pre-trained models, markedly enhancing generalization capability and deployment feasibility, thus advancing the practical automation of post-processing workflows.

📝 Abstract
Reliable classification of 3D-printed objects is essential for automating post-production workflows in industrial additive manufacturing. Despite extensive automation in other stages of the printing pipeline, this task still relies heavily on manual inspection, as the set of objects to be classified can change daily, making frequent model retraining impractical. Automating the identification step is therefore critical for improving operational efficiency. A vision model that could classify any set of objects by utilizing their corresponding CAD models and avoiding retraining would be highly beneficial in this setting. To enable systematic evaluation of vision models on this task, we introduce ThingiPrint, a new publicly available dataset that pairs CAD models with real photographs of their 3D-printed counterparts. Using ThingiPrint, we benchmark a range of existing vision models on the task of 3D-printed object classification. We additionally show that contrastive fine-tuning with a rotation-invariant objective allows effective prototype-based classification of previously unseen 3D-printed objects. By relying solely on the available CAD models, this avoids the need for retraining when new objects are introduced. Experiments show that this approach outperforms standard pretrained baselines, suggesting improved generalization and practical relevance for real-world use.
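The prototype-based classification the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: it assumes an upstream vision encoder (e.g. the contrastively fine-tuned model from the paper) has already produced embeddings for several rendered views of each CAD model and for the photographed print. Averaging the view embeddings per class approximates a rotation-invariant prototype, and a new object set requires only new prototypes, never retraining.

```python
import numpy as np

def build_prototypes(view_embeddings):
    """Build one prototype per class by averaging the embeddings of
    several rendered CAD views, then L2-normalising. Averaging over
    views is a simple stand-in for rotation invariance.

    view_embeddings: dict mapping class label -> (n_views, dim) array.
    """
    prototypes = {}
    for label, views in view_embeddings.items():
        mean = np.mean(views, axis=0)
        prototypes[label] = mean / np.linalg.norm(mean)
    return prototypes

def classify(query, prototypes):
    """Assign a query embedding (a photographed print) to the class
    whose prototype has the highest cosine similarity."""
    q = query / np.linalg.norm(query)
    return max(prototypes, key=lambda label: float(q @ prototypes[label]))

# Toy demo with synthetic embeddings standing in for encoder outputs:
# two well-separated classes in an 8-dimensional embedding space.
rng = np.random.default_rng(0)
bracket_views = rng.normal(size=(4, 8)) + np.array([5, 0, 0, 0, 0, 0, 0, 0])
gear_views = rng.normal(size=(4, 8)) + np.array([0, 5, 0, 0, 0, 0, 0, 0])
prototypes = build_prototypes({"bracket": bracket_views, "gear": gear_views})
```

Swapping in a different day's object set only means re-rendering its CAD models and recomputing `build_prototypes`, which is the deployment property the paper targets.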
Problem

Research questions and friction points this paper is trying to address.

3D-printed object classification
post-production automation
additive manufacturing
zero-shot classification
CAD-based recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot classification
CAD-based vision
rotation-invariant contrastive learning
additive manufacturing automation
ThingiPrint dataset
Fanis Mathioulakis
KU Leuven, Belgium.
Gorjan Radevski
Research at NEC Laboratories Europe | Postdoc at KU Leuven
deep learning, machine learning, natural language processing, computer vision
Silke GC Cleuren
Materialise, Belgium.
Michel Janssens
Materialise, Belgium.
Brecht Das
Materialise, Belgium.
Koen Schauwaert
Iristick, Belgium.
Tinne Tuytelaars
KU Leuven - PSI, Belgium
computer vision, continual learning