xai-cola: A Python library for sparsifying counterfactual explanations

📅 2026-02-25
🤖 AI Summary
Counterfactual explanations (CEs) often suffer from reduced interpretability and practical utility due to redundant feature modifications. To address this issue, this work proposes the first general-purpose, modular framework for CE sparsification, compatible with any built-in or external CE generator and integrating multiple sparsification strategies alongside visualization tools. Built on pandas, scikit-learn, and PyTorch, the framework offers a standardized API that supports the entire pipeline, from data preprocessing and model integration to sparsification and result analysis. Experimental results demonstrate that the proposed approach substantially improves explanation conciseness across diverse CE generators, reducing the number of modified features by up to 50% while preserving explanatory validity.
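To make the core idea concrete, here is a minimal sketch of one generic CE sparsification strategy: greedily revert modified features back to their factual values as long as the counterfactual still flips the model's prediction. This is an illustration of the general technique, not xai-cola's actual algorithm or API; all names and data here are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: the label depends only on the first two features, so a CE
# that perturbs all five features necessarily contains redundant changes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def sparsify(model, x_factual, x_cf):
    """Greedily undo feature changes while the CE still flips the model."""
    target = model.predict(x_cf.reshape(1, -1))[0]
    x_sparse = x_cf.copy()
    for i in range(len(x_cf)):
        if x_sparse[i] == x_factual[i]:
            continue                             # feature was never changed
        trial = x_sparse.copy()
        trial[i] = x_factual[i]                  # tentatively undo this change
        if model.predict(trial.reshape(1, -1))[0] == target:
            x_sparse = trial                     # change was redundant; drop it
    return x_sparse

x = X[np.argmin(X[:, 0] + X[:, 1])]              # a confidently class-0 point
x_cf = x + 3.0                                   # naive CE: shifts every feature
x_sparse = sparsify(model, x, x_cf)
print(np.count_nonzero(x_cf != x), "->", np.count_nonzero(x_sparse != x))
```

The reverted counterfactual remains valid (same predicted class as the original CE) but touches only the features that actually matter for the decision.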

๐Ÿ“ Abstract
Counterfactual explanation (CE) is an important domain within post-hoc explainability. However, the explanations generated by most CE generators are often highly redundant. This work introduces xai-cola, an open-source Python library that provides an end-to-end pipeline for sparsifying CEs produced by arbitrary generators, reducing superfluous feature changes while preserving their validity. It offers a documented API that takes as input raw tabular data as a pandas DataFrame, a preprocessing object (for standardization and encoding), and a trained scikit-learn or PyTorch model. On this basis, users can employ either the built-in CE generators or externally imported ones. The library also implements several sparsification policies and includes visualization routines for analysing and comparing sparsified counterfactuals. xai-cola is released under the MIT license and can be installed from PyPI. Empirical experiments indicate that xai-cola produces sparser counterfactuals across several CE generators, reducing the number of modified features by up to 50% in our setting. The source code is available at https://github.com/understanding-ml/COLA.
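The abstract says the API consumes three objects: raw tabular data as a pandas DataFrame, a preprocessing object for standardization and encoding, and a trained scikit-learn or PyTorch model. A typical way to prepare those three objects with plain scikit-learn might look as follows; the column names and data are invented, and this shows only standard scikit-learn usage, not any xai-cola call.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# 1) Raw tabular data as a pandas DataFrame (hypothetical columns).
df = pd.DataFrame({
    "age": [25, 40, 33, 51],
    "income": [30_000, 80_000, 52_000, 61_000],
    "job": ["clerk", "engineer", "clerk", "manager"],
    "approved": [0, 1, 0, 1],
})
X, y = df.drop(columns="approved"), df["approved"]

# 2) A preprocessing object: standardize numeric columns, encode categoricals.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["job"]),
])

# 3) A trained scikit-learn model wrapping the preprocessing step.
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())]).fit(X, y)
```

With these three pieces in hand, a user would then hand them to the library's pipeline (consult the project's documentation for the exact calls).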
Problem

Research questions and friction points this paper is trying to address.

counterfactual explanation
sparsification
explainable AI
redundancy
feature changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual explanation
sparsification
explainable AI
feature reduction
post-hoc interpretability