🤖 AI Summary
Counterfactual explanations (CEs) often suffer from reduced interpretability and practical utility due to redundant feature modifications. To address this issue, this work proposes the first general-purpose, modular framework for CE sparsification, compatible with any built-in or external CE generator and integrating multiple sparsification strategies alongside visualization tools. Built on pandas, scikit-learn, and PyTorch, the framework offers a standardized API that supports the entire pipeline, from data preprocessing and model integration to sparsification and result analysis. Experimental results demonstrate that the proposed approach significantly enhances explanation conciseness across diverse CE generators, reducing the number of modified features by up to 50% while preserving explanatory validity.
📝 Abstract
Counterfactual explanation (CE) is an important domain within post-hoc explainability. However, the explanations produced by most CE generators are often highly redundant. This work introduces xai-cola, an open-source Python library that provides an end-to-end pipeline for sparsifying CEs produced by arbitrary generators, reducing superfluous feature changes while preserving validity. It offers a documented API that takes as input raw tabular data as a pandas DataFrame, a preprocessing object (for standardization and encoding), and a trained scikit-learn or PyTorch model. On this basis, users can either use the built-in CE generators or import external ones. The library also implements several sparsification policies and includes visualization routines for analysing and comparing sparsified counterfactuals. xai-cola is released under the MIT license and can be installed from PyPI. Empirical experiments indicate that xai-cola produces sparser counterfactuals across several CE generators, reducing the number of modified features by up to 50% in our setting. The source code is available at https://github.com/understanding-ml/COLA.
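The core idea of CE sparsification (reverting superfluous feature changes while keeping the counterfactual's flipped prediction) can be illustrated with a self-contained toy sketch. This is not xai-cola's actual API; the greedy policy, the `sparsify` helper, and all data below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: the label depends mainly on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def sparsify(x, x_cf, model):
    """Greedily revert counterfactual features back to their original
    values, keeping only the changes needed to preserve the flipped label.
    (Illustrative policy only, not xai-cola's implementation.)"""
    target = model.predict(x_cf.reshape(1, -1))[0]
    sparse = x_cf.copy()
    for i in range(len(x)):
        trial = sparse.copy()
        trial[i] = x[i]  # tentatively undo the change to feature i
        if model.predict(trial.reshape(1, -1))[0] == target:
            sparse = trial  # the change was superfluous; drop it
    return sparse

x = np.array([-1.0, 0.5, -0.3, 0.8])     # original instance
x_cf = np.array([1.5, -0.5, 0.3, -0.8])  # dense counterfactual (all 4 features changed)
x_sparse = sparsify(x, x_cf, model)
print("features still changed:", int((x_sparse != x).sum()))
```

Here the sparsified counterfactual typically keeps only the change to the decisive feature, while the prediction remains flipped; xai-cola applies this kind of policy behind a generator-agnostic interface.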