Measuring Error Alignment for Decision-Making Systems

📅 2024-09-20
🏛️ arXiv.org
🤖 AI Summary
How can the alignment of AI decision-making with human values be assessed, so as to enhance trustworthiness? This paper proposes a behaviour-based value-alignment paradigm grounded in erroneous decisions. It introduces two behavioural alignment metrics, *misclassification agreement* and *class-level error similarity*, which quantify the similarity between AI and human judgements, specifically over misclassified instances and error-class distributions, to measure value alignment indirectly. The method combines statistical modelling of error patterns with cross-system divergence measures (e.g., Jensen–Shannon divergence, Kendall rank correlation) to compare error distributions. Experiments show that both metrics correlate strongly with representation-level alignment (ρ > 0.85) and provide information complementary to existing behavioural metrics. Crucially, they overcome the low sensitivity of conventional approaches, enabling lightweight, scalable, and empirically grounded evaluation of value alignment.
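The summary above does not reproduce the paper's formal definitions, but one plausible reading of *misclassification agreement*, the overlap between the two systems' error sets on shared instances, can be sketched as follows. The function name and the convention for the no-errors case are assumptions, not taken from the paper:

```python
import numpy as np

def misclassification_agreement(y_true, pred_a, pred_b):
    """Hypothetical sketch: of the instances misclassified by either
    system, what fraction are misclassified by both (Jaccard overlap
    of the two error sets)?"""
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    err_a = pred_a != y_true
    err_b = pred_b != y_true
    union = (err_a | err_b).sum()
    if union == 0:
        # Neither system errs: treat as perfect agreement by convention.
        return 1.0
    return (err_a & err_b).sum() / union
```

For example, with `y_true = [0, 1, 2, 1, 0]`, `pred_a = [0, 2, 2, 1, 1]`, and `pred_b = [0, 2, 1, 1, 1]`, system A errs on instances 1 and 4, system B on instances 1, 2, and 4, so the agreement is 2/3.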

📝 Abstract
Given that AI systems are set to play a pivotal role in future decision-making processes, their trustworthiness and reliability are of critical concern. Due to their scale and complexity, modern AI systems resist direct interpretation, and alternative ways are needed to establish trust in these systems and determine how well they align with human values. We argue that good measures of the information-processing similarities between AI and humans may be able to achieve these same ends. While representational alignment (RA) approaches measure similarity between the internal states of two systems, the associated data can be expensive and difficult to collect for human systems. In contrast, behavioural alignment (BA) comparisons are cheaper and easier, but questions remain as to their sensitivity and reliability. We propose two new behavioural alignment metrics: *misclassification agreement*, which measures the similarity between the errors of two systems on the same instances, and *class-level error similarity*, which measures the similarity between the error distributions of two systems. We show that our metrics correlate well with RA metrics and provide complementary information to another BA metric within a range of domains, and set the scene for a new approach to value alignment.
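As a rough illustration of how *class-level error similarity* might compare error distributions using the Jensen–Shannon divergence mentioned in the summary, here is a hedged sketch. Binning errors by their true class and taking `1 - JS` (with base-2 logarithms, so the divergence lies in [0, 1]) as the similarity are assumptions, not the paper's stated construction:

```python
import numpy as np

def error_class_distribution(y_true, y_pred, n_classes):
    """Distribution over true classes of the misclassified instances."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    errs = y_true[y_true != y_pred]
    counts = np.bincount(errs, minlength=n_classes).astype(float)
    total = counts.sum()
    # Fall back to a uniform distribution if a system makes no errors.
    return counts / total if total else np.full(n_classes, 1.0 / n_classes)

def js_divergence(p, q):
    """Jensen-Shannon divergence in bits; ranges over [0, 1]."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log2(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def class_level_error_similarity(y_true, pred_a, pred_b, n_classes):
    """Hypothetical similarity: 1 means identical error distributions,
    0 means the two systems err on disjoint sets of classes."""
    p = error_class_distribution(y_true, pred_a, n_classes)
    q = error_class_distribution(y_true, pred_b, n_classes)
    return 1.0 - js_divergence(p, q)
```

Two systems that misclassify exactly the same distribution of classes score 1.0, while systems whose errors concentrate on disjoint classes score 0.0, regardless of which specific instances they get wrong.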
Problem

Research questions and friction points this paper is trying to address.

- AI Ethics
- Decision-making Consistency
- Trust and Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Behavioural Alignment
- Artificial Intelligence Ethics
- Representation Alignment Consistency
Binxia Xu
Dept. of Information Studies, University College London, United Kingdom
Antonis Bikakis
Professor of Artificial Intelligence, University College London
Artificial Intelligence, Knowledge Representation, Non-monotonic Reasoning, Computational
Daniel Onah
Dept. of Information Studies, University College London, United Kingdom
A. Vlachidis
Dept. of Information Studies, University College London, United Kingdom
Luke Dickens
Associate Professor in Machine Learning, University College London
Machine Learning, Reinforcement Learning, Computational Neuroscience