🤖 AI Summary
Existing methods for generating counterfactual explanations of multivariate time series often yield samples that lack plausibility and interpretability, undermining model transparency. To address this, we propose GenFacts, a framework built on a class-discriminative variational autoencoder that integrates contrastive learning, classification-consistency constraints, prototype-based initialization, and authenticity regularization. These components jointly ensure minimal perturbation, semantic plausibility, and decision-consistent counterfactuals. Our key innovations are a prototype-guided optimization mechanism and a multi-objective co-training strategy, which together enhance both authenticity and interpretability. Evaluated on radar gesture and handwritten trajectory datasets, GenFacts improves plausibility by 18.7% over state-of-the-art methods; human evaluation further confirms its superior interpretability, achieving the highest score among all baselines.
📄 Abstract
Counterfactual explanations aim to enhance model transparency by showing how inputs can be minimally altered to change predictions. For multivariate time series, existing methods often generate counterfactuals that are invalid, implausible, or unintuitive. We introduce GenFacts, a generative framework based on a class-discriminative variational autoencoder. It integrates contrastive and classification-consistency objectives, prototype-based initialization, and realism-constrained optimization. We evaluate GenFacts on radar gesture data as an industrial use case and handwritten letter trajectories as an intuitive benchmark. Across both datasets, GenFacts outperforms state-of-the-art baselines in plausibility (+18.7%) and achieves the highest interpretability scores in a human study. These results highlight that plausibility and user-centered interpretability, rather than sparsity alone, are key to actionable counterfactuals in time series data.
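To make the composite objective concrete, here is a minimal, hedged sketch of how the ingredients named above (minimal perturbation, prototype-guided initialization, classification consistency, and a realism/authenticity regularizer) could combine into one scalar loss. All function names, weights, and the specific realism proxy are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def counterfactual_objective(x, x_cf, z_cf, proto_z, clf_prob_target,
                             w_rec=1.0, w_proto=0.5, w_cls=1.0, w_real=0.1):
    """Illustrative composite loss for a counterfactual x_cf of input x.

    z_cf:            latent code of the counterfactual (hypothetical encoder output)
    proto_z:         latent prototype of the target class (prototype-based init/anchor)
    clf_prob_target: classifier probability of the desired class on x_cf
    Weights w_* are placeholder hyperparameters, not values from the paper.
    """
    l_rec = np.mean((x - x_cf) ** 2)          # minimal perturbation: stay close to input
    l_proto = np.mean((z_cf - proto_z) ** 2)  # pull latent toward the class prototype
    l_cls = -np.log(clf_prob_target + 1e-8)   # classification consistency with target class
    l_real = np.mean(z_cf ** 2)               # crude realism proxy: keep latent on-manifold
    return (w_rec * l_rec + w_proto * l_proto
            + w_cls * l_cls + w_real * l_real)
```

In this sketch, lowering the loss trades off closeness to the original series against landing confidently, and plausibly, in the target class; the prototype term is what gives the optimization a semantically meaningful starting direction rather than an arbitrary adversarial one.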