Improving Explanations: Applying the Feature Understandability Scale for Cost-Sensitive Feature Selection

📅 2026-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a key limitation of existing natural-language explanation methods for tabular data: they neglect how well users actually understand the input features, which undermines explanation effectiveness. To bridge this gap, the work introduces, for the first time, a feature understandability scale into a cost-sensitive feature selection framework and proposes a joint optimization approach that simultaneously maximizes classification accuracy and the understandability of the selected features. Experiments on two real-world datasets show that the proposed approach substantially improves the face-value understandability of the generated explanations while maintaining high predictive performance. These findings support the feasibility of co-optimizing accuracy and understandability, suggesting a shift in explainable AI from design-driven paradigms toward alignment with user cognition.
πŸ“ Abstract
With the growing pervasiveness of artificial intelligence, the ability to explain the inferences made by machine learning models has become increasingly important. Numerous techniques for model explainability have been proposed, with natural-language textual explanations among the most widely used. When applied to tabular data, these explanations typically draw on input features to justify a given inference, so a user's ability to interpret an explanation depends on their understanding of those features. To quantify this feature-level understanding, Rossberg et al. introduced the Feature Understandability Scale. Building on that work, this proof-of-concept study collects understandability scores across two datasets, proposes a methodology for co-optimising understandability and accuracy, and presents the resulting explanations alongside the model accuracies. This work contributes to the body of knowledge on interpretability by design. It is found that accuracy and understandability can be successfully co-optimised while maintaining high classification performance, and the resulting explanations are considered more understandable at face value. Further research will aim to confirm these findings through user evaluation.
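The co-optimisation described above can be illustrated with a minimal sketch: treat low understandability as a feature "cost" and greedily select features that trade predictive relevance against that cost. Everything below is hypothetical, not the paper's actual algorithm or data; the feature names, relevance scores, understandability ratings, and the trade-off weight `lam` are all invented for illustration.

```python
# Hedged sketch of cost-sensitive feature selection where the cost is low
# user understandability, assuming a simple additive objective:
#   J(S) = sum(relevance) - lam * sum(1 - understandability)
# All names and numbers are illustrative, not from the paper.

FEATURES = {
    # name: (relevance_to_label, understandability_in_[0, 1])
    "age":            (0.30, 0.95),
    "bmi":            (0.25, 0.80),
    "hba1c_variance": (0.40, 0.20),  # predictive but hard to understand
    "zip_code_hash":  (0.05, 0.05),  # neither predictive nor understandable
}

def objective(selected, lam=0.5):
    """Trade predictive relevance against the understandability cost."""
    relevance = sum(FEATURES[f][0] for f in selected)
    cost = sum(1.0 - FEATURES[f][1] for f in selected)
    return relevance - lam * cost

def greedy_select(lam=0.5):
    """Forward selection: add the feature that most improves J; stop when none does."""
    selected, remaining = [], set(FEATURES)
    while remaining:
        best = max(remaining, key=lambda f: objective(selected + [f], lam))
        if objective(selected + [best], lam) <= objective(selected, lam):
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam = 0.5` this sketch drops the hard-to-understand but predictive feature; with `lam = 0` it reduces to plain relevance-based selection, showing how the weight steers the accuracy/understandability trade-off.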
Problem

Research questions and friction points this paper is trying to address.

explainability
feature understandability
cost-sensitive feature selection
interpretability
tabular data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature Understandability
Cost-Sensitive Feature Selection
Explainable AI
Interpretability by Design
Co-optimization