Trustworthiness Preservation by Copies of Machine Learning Systems

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of verifying trustworthiness preservation across machine learning “copies”—e.g., variants of a model trained on different datasets or distinct models evaluated on the same dataset. We propose the first probabilistic query calculus framework, formally defining four trustworthiness transfer relations: Justifiably Trustworthy, Equally Trustworthy, Weakly Trustworthy, and Almost Trustworthy, and systematically characterizing their logical composition properties and behavioral equivalence conditions. Integrating probabilistic logic, formal verification, and trustworthy AI modeling, we develop a computable verification tool that supports (i) justification of trustworthiness for partial behaviors of copy systems, (ii) cross-copy trustworthiness comparison, and (iii) compositional trust reasoning. Our results establish, for the first time, provable, comparable, and compositional trustworthiness for ML copies—providing both theoretical foundations and practical verification capabilities for responsible AI.

📝 Abstract
A common practice in ML systems development concerns the training of the same model under different datasets, and the use of the same (training and test) sets for different learning models. The first case is a desirable practice for identifying high-quality and unbiased training conditions. The latter case coincides with the search for optimal models under a common dataset for training. These differently obtained systems have been considered akin to copies. In the quest for responsible AI, a legitimate but hardly investigated question is how to verify that trustworthiness is preserved by copies. In this paper we introduce a calculus to model and verify probabilistic complex queries over data and define four distinct notions: Justifiably, Equally, Weakly and Almost Trustworthy, which can be checked by analysing the (partial) behaviour of the copy with respect to its original. We provide a study of the relations between these notions of trustworthiness, and how they compose with each other and under logical operations. The aim is to offer a computational tool to check the trustworthiness of possibly complex systems copied from an original whose behaviour is known.
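The abstract's core idea is that trustworthiness of a copy can be assessed by comparing its (partial) behaviour against the original's. The following is a minimal sketch of that comparison, not the paper's calculus: the function names (`agreement_rate`, `classify_copy`), the tolerance `eps`, and the decision thresholds are all illustrative assumptions; the paper's actual definitions are given via probabilistic queries.

```python
def agreement_rate(original, copy, inputs):
    # Fraction of inputs on which the copy's answer matches the original's.
    matches = sum(1 for x in inputs if original(x) == copy(x))
    return matches / len(inputs)

def classify_copy(original, copy, inputs, eps=0.05):
    # Hypothetical decision rule (not the paper's formal definitions):
    # "equally" if behaviour coincides on all observed inputs,
    # "almost" if it deviates with empirical frequency at most eps.
    rate = agreement_rate(original, copy, inputs)
    if rate == 1.0:
        return "equally trustworthy"
    if rate >= 1.0 - eps:
        return "almost trustworthy"
    return "not preserved"

# Toy binary classifiers as plain functions over integers.
original = lambda x: x % 2 == 0
copy = lambda x: True if x == 7 else x % 2 == 0  # disagrees only on x == 7

inputs = list(range(100))
print(classify_copy(original, copy, inputs, eps=0.05))
```

Here the copy disagrees with the original on 1 of 100 inputs (rate 0.99), so under the assumed tolerance it would count as "almost trustworthy"; the paper additionally studies how such judgements compose under logical operations.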
Problem

Research questions and friction points this paper is trying to address.

Verify trustworthiness preservation in ML system copies
Model probabilistic queries for trustworthiness verification
Define and analyze four distinct trustworthiness notions
Innovation

Methods, ideas, or system contributions that make the work stand out.

A calculus for verifying probabilistic complex queries
Analysis of four distinct trustworthiness notions and their composition
A computational tool for checking trustworthiness of copied systems
Leonardo Ceragioli
LUCI Lab, Department of Philosophy, Università degli Studi di Milano
Giuseppe Primiero
Department of Philosophy, University of Milan
Logic · Philosophy of Computer Science