🤖 AI Summary
This work addresses the systematic failures of existing identifiability evaluation metrics (such as MCC, DCI, and R²) that arise from mismatches between their implicit assumptions and either the true data-generating process or the encoder's geometric structure. Through theoretical analysis and synthetic benchmarking, we develop a stress-testing framework and introduce, for the first time, a taxonomy that disentangles the roles of data-generation assumptions and encoder geometry in determining metric validity. This classification clarifies the failure mechanisms of current metrics in both classical and post-hoc identifiability settings. Alongside this analysis, we release a reproducible evaluation suite that delineates the valid applicability domain of each metric, providing reliable tools and principled guidelines for assessing identifiability in representation learning.
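To make one such failure mode concrete, below is a minimal, self-contained sketch, not the paper's released suite: the MCC implementation and the toy data are our own assumptions. It computes MCC by optimally matching learned factors to ground-truth factors, then shows how a Pearson-based MCC penalises a representation that is correctly identified up to a permutation and an element-wise monotone transform (a false negative), while a rank-based variant scores it perfectly.

```python
# Minimal sketch (assumed implementation, not the paper's suite): MCC via
# optimal assignment, plus a toy stress test where the metric's implicit
# linearity assumption is violated by an element-wise monotone transform.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

def mcc(z_true, z_hat, method="pearson"):
    """Mean correlation coefficient after matching learned to true factors."""
    d = z_true.shape[1]
    if method == "pearson":
        # Cross-correlation block between true and learned factors.
        corr = np.corrcoef(z_true, z_hat, rowvar=False)[:d, d:]
    else:
        # Spearman rank correlation is invariant to monotone transforms.
        corr = spearmanr(z_true, z_hat)[0][:d, d:]
    corr = np.abs(corr)
    rows, cols = linear_sum_assignment(-corr)  # maximise total correlation
    return corr[rows, cols].mean()

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=(10_000, 3))  # ground-truth factors
# Permutation + element-wise cube: within the equivalence class that many
# nonlinear identifiability results guarantee, yet nonlinear per component.
z_hat = z[:, [2, 0, 1]] ** 3

print(f"Pearson MCC:  {mcc(z, z_hat):.3f}")              # ~0.917: false negative
print(f"Spearman MCC: {mcc(z, z_hat, 'spearman'):.3f}")  # 1.000: transform-invariant
```

The assignment-based recipe above underlies common MCC implementations; which correlation measure it is paired with is one axis along which a metric's implicit assumptions about the DGP and encoder change, and hence one axis a stress test can probe.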
📄 Abstract
Identifiability in representation learning is commonly evaluated using standard metrics (e.g., MCC, DCI, R²) on synthetic benchmarks with known ground-truth factors. These metrics are assumed to reflect recovery up to the equivalence class guaranteed by identifiability theory. We show that this assumption holds only under specific structural conditions: each metric implicitly encodes assumptions about both the data-generating process (DGP) and the encoder. When these assumptions are violated, metrics become misspecified and can produce systematic false positives and false negatives. Such failures occur both within classical identifiability regimes and in post-hoc settings where identifiability is most needed. We introduce a taxonomy separating DGP assumptions from encoder geometry, use it to characterise the validity domains of existing metrics, and release an evaluation suite for reproducible stress testing and comparison.