🤖 AI Summary
This work addresses a limitation of existing example-based AI explanation methods: they struggle to faithfully convey how a model's decision should change relative to an example when many features differ by large amounts, compromising both fidelity and interpretability. Drawing inspiration from the "comparable instances" approach in real estate valuation, the authors propose a comparable-example explanation mechanism augmented with counterfactual trajectory adjustment. By constructing monotonic, attribute-wise counterfactual paths in the feature space, the method traces how the model's decision evolves from a comparable instance to the target sample. This enables controlled manipulation of feature variations, and of the corresponding model outputs, within example-based explanations. Empirical results show that it outperforms baseline methods (linear regression, linearly adjusted Comparables, and unadjusted Comparables) in explanation fidelity and precision, user judgment accuracy, and tightness of uncertainty bounds.
📝 Abstract
Explaining with examples is an intuitive way to justify AI decisions. However, it is difficult to understand how a decision value should change relative to examples whose many features differ by large amounts. We draw from real estate valuation, which uses Comparables: examples with known values that serve as references for comparison. Estimates are made more accurate by hypothetically adjusting the attributes of each Comparable and changing its value accordingly based on adjustment factors. We propose Comparables XAI for relatable example-based explanations of AI, with Trace adjustments that trace counterfactual changes from each Comparable to the Subject, one attribute at a time, monotonically along the AI feature space. In modelling and user studies, Trace-adjusted Comparables achieved the highest XAI faithfulness and precision, the highest user accuracy, and the narrowest uncertainty bounds compared to linear regression, linearly adjusted Comparables, and unadjusted Comparables. This work contributes a new analytical basis for using example-based explanations to improve user understanding of AI decisions.
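The trace-adjustment idea described above can be sketched in code: walk from a Comparable's features to the Subject's, one attribute at a time, in monotone steps, recording the model's prediction after each step. This is a minimal illustration, not the paper's implementation; the function name `trace_adjust`, the step count `n_steps`, and the toy valuation model are all assumptions for the sake of the example.

```python
# Hypothetical sketch of attribute-wise "trace adjustment": move each
# feature monotonically from the Comparable's value to the Subject's,
# one attribute at a time, logging the model output along the path.
import numpy as np

def trace_adjust(model, comparable, subject, n_steps=5):
    """Return a list of (feature_index, adjusted_value, prediction)
    along a monotonic, attribute-wise path from comparable to subject.
    The first entry (index -1) is the unadjusted Comparable."""
    x = np.array(comparable, dtype=float)
    target = np.array(subject, dtype=float)
    path = [(-1, None, float(model(x)))]           # start at the Comparable
    for j in range(len(x)):                        # one attribute at a time
        # Monotone steps from the current value toward the Subject's value.
        for v in np.linspace(x[j], target[j], n_steps + 1)[1:]:
            x[j] = v
            path.append((j, float(v), float(model(x))))
    return path

# Toy linear "valuation" model standing in for the AI under explanation
# (illustrative only; the method applies to any predictive model).
model = lambda x: 100.0 + 50.0 * x[0] + 20.0 * x[1]

path = trace_adjust(model, comparable=[1.0, 2.0], subject=[3.0, 1.0])
# The final point of the trace reproduces the model's prediction for
# the Subject, so a user can follow how each attribute change moved
# the decision value from the Comparable's known value to the Subject's.
```

Each intermediate triple shows which attribute moved, to what value, and what the model then predicted, which is the kind of step-by-step decision trace the abstract describes.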