🤖 AI Summary
Existing group counterfactual methods struggle to generalize to new members, rely on strong model assumptions, or distort the geometric structure of the group. This work introduces optimal transport theory into group counterfactual generation for the first time, learning an explicit optimal transport map that directly transforms any group instance into its counterfactual without re-optimization for new members. The approach preserves the group’s geometric structure while minimizing total transport cost and, under linear classifiers, yields a rigorously derived convex optimization formulation (QP/QCQP). Experiments demonstrate that the method significantly outperforms baselines in generalization, geometric fidelity, and transport cost, maintaining advantages even when the linear assumption does not hold.
📝 Abstract
Group counterfactual explanations find a set of counterfactual instances to explain a group of input instances contrastively. However, existing methods either (i) optimize counterfactuals only for a fixed group and do not generalize to new group members, (ii) rely on strong model assumptions (e.g., linearity) for tractability, and/or (iii) poorly control distortion of the counterfactual group's geometry. We instead learn an explicit optimal transport map that sends any group instance to its counterfactual without re-optimization, minimizing the group's total transport cost. This enables generalization with fewer parameters, making the common actionable recourse easier to interpret. For linear classifiers, we prove that the functions representing group counterfactuals can be derived via mathematical optimization, and we identify the underlying convex program type (QP, QCQP, ...). Experiments show that our maps generalize accurately, preserve group geometry, and incur only negligible additional transport cost compared to baseline methods. When model linearity cannot be exploited, our approach still significantly outperforms the baselines.
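To make the linear-classifier claim concrete, here is a minimal, hypothetical sketch (not the paper's actual formulation): for a linear classifier f(x) = w·x + b, the simplest transport map is a single shared translation δ that moves every group member across the decision boundary with margin at least ε while minimizing ||δ||². Because the binding constraint is the member with the smallest score, this particular QP admits a closed-form solution with δ parallel to w. The function name, ε, and the random data are all illustrative assumptions.

```python
import numpy as np

def shared_recourse_translation(X, w, b, eps=1e-3):
    """Minimum-norm shared translation delta (illustrative sketch) such that
    every group member satisfies w @ (x + delta) + b >= eps.
    The binding constraint is the worst-scoring member, so the QP
    min ||delta||^2 s.t. margins >= eps has a closed form: delta ∝ w."""
    scores = X @ w + b                  # current classifier scores per member
    worst = scores.min()                # most-negative margin in the group
    gap = max(eps - worst, 0.0)         # distance the worst member must move
    return (gap / (w @ w)) * w          # minimum-norm shift along w

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3)) - 1.0      # a group classified mostly negative
w = np.array([1.0, 0.5, -0.2])
b = -0.5

delta = shared_recourse_translation(X, w, b)
X_cf = X + delta                        # counterfactual group
```

Because the map is a pure translation, pairwise distances within the group are exactly preserved, illustrating (in this special case) how an explicit map can control geometric distortion while remaining a convex program; richer map families in the same spirit would yield the QP/QCQP variants mentioned above.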