🤖 AI Summary
Amid growing concern that existing evaluation metrics do not adequately capture AI system trustworthiness, and that user trust is difficult to calibrate in the era of large language models, this paper proposes the Trust Calibration Maturity Model (TCMM), a multidimensional maturity framework explicitly designed for user trust calibration. The TCMM establishes a structured, measurable, communicable, and evolvable scoring scheme across five dimensions: Performance Characterization, Bias & Robustness Quantification, Transparency, Safety & Security, and Usability, enabling abstract notions of trustworthiness to be systematically quantified and communicated. Through maturity modeling, multidimensional trust assessment, and governance framework design, the TCMM provides formal definitions and a standardized vocabulary. Demonstrations on two AI system-target task pairs illustrate its practical use, helping users perceive an AI system's capability boundaries more accurately and use it more appropriately.
📝 Abstract
The proliferation of powerful AI capabilities and systems necessitates a commensurate focus on user trust. We introduce the Trust Calibration Maturity Model (TCMM) to capture and communicate the maturity of AI system trustworthiness. The TCMM scores maturity along five dimensions that drive user trust: Performance Characterization, Bias & Robustness Quantification, Transparency, Safety & Security, and Usability. Information captured in the TCMM can be presented alongside system performance information to help a user appropriately calibrate trust, compare requirements with current states of development, and clarify trustworthiness needs. We present the TCMM and demonstrate its use on two AI system-target task pairs.
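As a rough illustration of how TCMM-style information might be captured and presented alongside performance results, here is a minimal Python sketch. Only the five dimension names come from the abstract; the 0-3 maturity scale, the `TCMMScorecard` class, and the example system-task pair are assumptions made purely for illustration and are not the paper's actual definitions.

```python
from dataclasses import dataclass, field

# Assumed ordinal maturity scale; the abstract does not specify the paper's
# actual number of levels, so 0-3 is used here only as a placeholder.
MATURITY_LEVELS = (0, 1, 2, 3)

# The five trust-driving dimensions named in the abstract.
DIMENSIONS = (
    "Performance Characterization",
    "Bias & Robustness Quantification",
    "Transparency",
    "Safety & Security",
    "Usability",
)


@dataclass
class TCMMScorecard:
    """Maturity scores for one AI system-target task pair (illustrative only)."""
    system: str
    task: str
    scores: dict = field(default_factory=dict)  # dimension -> maturity level

    def set_score(self, dimension: str, level: int) -> None:
        # Validate against the dimension names and the assumed level scale.
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        if level not in MATURITY_LEVELS:
            raise ValueError(f"Level must be one of {MATURITY_LEVELS}")
        self.scores[dimension] = level

    def summary(self) -> str:
        """Render the scorecard so it can be shown next to performance info."""
        lines = [f"TCMM scorecard for {self.system} on '{self.task}':"]
        for dim in DIMENSIONS:
            lines.append(f"  {dim}: level {self.scores.get(dim, 'not assessed')}")
        return "\n".join(lines)


# Hypothetical usage: score a made-up system-task pair and print the result.
card = TCMMScorecard(system="ExampleNet", task="tumor segmentation")
card.set_score("Performance Characterization", 3)
card.set_score("Transparency", 2)
print(card.summary())
```

The intent of the sketch is simply that a per-dimension maturity record travels with the system's performance numbers, so a user can see not only how well the system scores but how mature the evidence behind each trust dimension is.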