🤖 AI Summary
In metal additive manufacturing, quality assessment remains heavily reliant on expert experience, while existing AI-based approaches lack interpretability. Method: This paper proposes the first explainable quality assessment framework integrating vision-language models (VLMs) with domain knowledge. It distills metallurgical expertise from peer-reviewed academic literature into a VLM and employs attention mechanisms to enable semantic-level defect reasoning and natural-language explanation generation. Contribution/Results: Evaluated on 24 single-bead laser-wire directed energy deposition (DED-LW) samples, the framework achieves significant improvements over generic VLMs in both assessment accuracy (+12.7% F1-score) and explanation consistency (+18.3% BLEU-4). By grounding visual reasoning in domain semantics and generating human-readable justifications, it enhances transparency and operational utility, directly addressing industry’s dual requirements for interpretability and practicality.
📝 Abstract
Image-based quality assessment (QA) in additive manufacturing (AM) often relies heavily on the expertise and constant attention of skilled human operators. While machine learning and deep learning methods have been introduced to assist in this task, they typically provide black-box outputs without interpretable justifications, which limits trust and adoption in real-world settings. In this work, we introduce a novel QA-VLM framework that leverages the attention mechanisms and reasoning capabilities of vision-language models (VLMs), enriched with application-specific knowledge distilled from peer-reviewed journal articles, to generate human-interpretable quality assessments. Evaluated on 24 single-bead samples produced by laser-wire directed energy deposition (DED-LW), our framework demonstrates higher validity and consistency in explanation quality than off-the-shelf VLMs. These results highlight the potential of our approach to enable trustworthy, interpretable quality assessment in AM applications.