🤖 AI Summary
Image classification models often yield overconfident predictions due to architectural limitations, dataset bias, and domain shift, undermining prediction reliability. To address this, we propose an interpretable framework that couples generalized polynomial chaos (gPC) expansion with Sobol' global sensitivity analysis. Domain shift is modeled as a stochastic input, which makes it possible to quantify how perturbations of the input distribution propagate to output uncertainty and to identify the input parameters that dominate predictive uncertainty. The method is validated on two real-world tasks from BMW Group production: welding defect classification with a fine-tuned ResNet18 and an emblem classification model deployed in production facilities. Results show improved uncertainty attribution and precise localization of the factors critical to prediction robustness, providing a practical basis for diagnosing and optimizing high-assurance image classification systems.
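As a rough illustration of the pipeline described above, the sketch below fits a gPC surrogate to a classifier's confidence as a function of stochastic domain-shift parameters and reads the Sobol' indices off the surrogate. This is a minimal sketch, not the authors' code: the choice of perturbation parameters, their ranges, and the `model_confidence` stand-in (which would apply the perturbation to a test image and return the fine-tuned classifier's softmax score) are illustrative assumptions, expressed with the `chaospy` library.

```python
# Minimal sketch (not the authors' code): attribute a classifier's output
# uncertainty to stochastic domain-shift parameters via Sobol' indices
# computed from a gPC surrogate, using the chaospy library.
import numpy as np
import chaospy

# Domain shift modeled as random perturbation parameters (assumed ranges).
brightness = chaospy.Uniform(-0.2, 0.2)   # additive brightness shift
blur_sigma = chaospy.Uniform(0.0, 2.0)    # Gaussian blur strength
joint = chaospy.J(brightness, blur_sigma)

def model_confidence(b, s):
    """Hypothetical stand-in: apply the (b, s) perturbation to a fixed test
    image, run the fine-tuned classifier, and return the softmax score of
    the true class. Replaced here by a smooth toy response."""
    return 0.95 - 0.3 * b**2 - 0.1 * s + 0.05 * b * s

# Build the gPC surrogate by least-squares regression on sampled evaluations.
expansion = chaospy.generate_expansion(3, joint)   # total polynomial order 3
samples = joint.sample(256, rule="sobol")          # shape (2, 256)
evals = np.array([model_confidence(b, s) for b, s in samples.T])
surrogate = chaospy.fit_regression(expansion, samples, evals)

# First-order and total Sobol' indices follow from the gPC coefficients.
print("first-order:", chaospy.Sens_m(surrogate, joint))
print("total:      ", chaospy.Sens_t(surrogate, joint))
```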
📝 Abstract
Integrating advanced communication protocols in production has accelerated the adoption of data-driven predictive quality methods, notably machine learning (ML) models. However, ML models for image classification face significant uncertainties arising from the model, the data, and domain shift. These uncertainties lead to overconfidence in the classification model's output. To better understand these models, sensitivity analysis can quantify the relative influence of input parameters on the output. This work investigates the sensitivity of image classification models used for predictive quality. We propose modeling the distributional domain shifts of inputs as random variables and quantifying their impact on the model's outputs using Sobol' indices computed via generalized polynomial chaos (gPC). The approach is validated in a case study on a welding defect classification problem, using a fine-tuned ResNet18 model and an emblem classification model deployed in BMW Group production facilities.
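To make the black-box map concrete, here is a hedged sketch of the evaluation step assumed by the surrogate fit above: perturb a single image with sampled brightness and blur parameters and read off a ResNet18's softmax confidence. The perturbation choices, class count, and `confidence` helper are illustrative assumptions; the paper's actual perturbation model and fine-tuned weights are not reproduced here.

```python
# Hedged sketch of one evaluation of the classifier under domain shift.
# Assumptions: two classes, brightness/blur perturbations, untrained weights
# (fine-tuned weights would be loaded in practice).
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet18

model = resnet18(num_classes=2)  # placeholder for the fine-tuned model
model.eval()

@torch.no_grad()
def confidence(image: torch.Tensor, brightness: float, blur_sigma: float,
               true_class: int = 0) -> float:
    """Apply a sampled (brightness, blur) perturbation to a (3, H, W) image
    in [0, 1] and return the softmax probability of the true class."""
    shifted = (image + brightness).clamp(0.0, 1.0)
    if blur_sigma > 0:
        shifted = TF.gaussian_blur(shifted, kernel_size=9, sigma=blur_sigma)
    logits = model(shifted.unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, true_class].item()

# Example: one evaluation of the map that the gPC surrogate approximates.
dummy = torch.rand(3, 224, 224)
print(confidence(dummy, brightness=0.1, blur_sigma=1.5))
```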