Readable Twins of Unreadable Models

📅 2025-04-17
🤖 AI Summary
Deep learning models (DLMs) suffer from limited interpretability, hindering trust and deployment in high-stakes domains. Method: This paper introduces the “Readable Twin” framework—the first to adapt digital twin principles to eXplainable Deep Learning (XDL)—by constructing a lightweight, human-understandable proxy based on an Imprecise Information Flow Model (IIFM). We propose a systematic DLM-to-IIFM translation pipeline integrating model distillation, behavioral trajectory abstraction, and imprecise information flow modeling, balancing fidelity and interpretability. Contribution/Results: Evaluated on MNIST image classification, the generated IIFM twin significantly enhances decision transparency while preserving the original model’s classification logic with high consistency. Our core contribution is the formal definition and implementation of the first XDL-specific digital twin paradigm, enabling structured, traceable, and verifiable explanations for black-box models—thereby advancing both theoretical foundations and practical interpretability tools in XAI.

📝 Abstract
Creating responsible artificial intelligence (AI) systems is an important issue in contemporary AI research and development. One of the characteristics of responsible AI systems is their explainability. In this paper, we are interested in explainable deep learning (XDL) systems. Drawing on the practice of creating digital twins of physical objects, we introduce the idea of creating readable twins (in the form of imprecise information flow models) for unreadable deep learning models. The complete procedure for switching from a deep learning model (DLM) to an imprecise information flow model (IIFM) is presented. The proposed approach is illustrated with an example of a deep learning classification model for image recognition of handwritten digits from the MNIST data set.
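The paper's actual DLM-to-IIFM procedure is not reproduced here, but the general idea it rests on — abstracting a network's opaque activations into coarse, human-readable states and tracking which outcomes each state can lead to — can be sketched roughly as follows. Everything in this sketch (the tiny random-weight stand-in network, the binning scheme, the names `forward`, `abstract_state`, `flow`) is an illustrative assumption, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a trained DLM: a tiny MLP with fixed random weights
# (16 inputs -> 8 hidden units -> 3 classes).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    """Return the hidden activation and the class scores for one input."""
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h, h @ W2

def abstract_state(h, n_bins=3):
    """Map a hidden activation to a coarse symbolic state by binning each
    unit's value -- an (assumed) imprecise abstraction of the hidden layer."""
    edges = np.linspace(0.0, h.max() + 1e-9, n_bins + 1)
    return tuple(int(np.digitize(v, edges[1:-1])) for v in h)

# Build a toy "information flow" table: abstract state -> set of classes the
# original model predicts from inputs that land in that state.
flow = {}
for _ in range(200):
    x = rng.normal(size=16)
    h, scores = forward(x)
    flow.setdefault(abstract_state(h), set()).add(int(np.argmax(scores)))

# States that map to more than one class are exactly where the readable
# abstraction is imprecise about the underlying model's behaviour.
ambiguous = {s: c for s, c in flow.items() if len(c) > 1}
print(f"{len(flow)} abstract states, {len(ambiguous)} ambiguous")
```

The resulting table of abstract states is readable in a way the raw weight matrices are not, at the cost of the imprecision recorded in the multi-class states — the fidelity/interpretability trade-off the paper's pipeline is designed to manage.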
Problem

Research questions and friction points this paper is trying to address.

Creating explainable deep learning systems for responsible AI
Developing readable twins for unreadable deep learning models
Transforming deep learning models into imprecise information flow models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Creating readable twins for unreadable models
Using imprecise information flow models
Transition from DLM to IIFM for explainability
👥 Authors
Piotr Kulicki — The John Paul II Catholic University of Lublin, Poland (logic, artificial intelligence, formal ontology)
Michal Kalisz — The John Paul II Catholic University of Lublin, Poland
Maciej Stanislawski — University of Warmia and Mazury, Olsztyn, Poland
Jaromir Sarzyński — University of Rzeszów, Poland