Who Guards the Guardians? The Challenges of Evaluating Identifiability of Learned Representations

📅 2026-02-27
📈 Citations: 0
✹ Influential: 0
đŸ€– AI Summary
This work addresses the systematic misjudgments of existing identifiability evaluation metrics—such as MCC, DCI, and RÂČ—arising from mismatches between their implicit assumptions and either the true data-generating process or the encoder’s geometric structure. Through theoretical analysis and synthetic benchmarking, we develop a stress-testing framework and introduce, for the first time, a taxonomy that disentangles the roles of data-generation assumptions and encoder geometry in determining metric validity. This classification clarifies the failure mechanisms of current metrics under both classical and post-hoc identifiability settings. Accompanying this analysis, we release a reproducible evaluation suite that delineates the valid applicability domains of each metric, thereby providing reliable tools and principled guidelines for assessing identifiability in representation learning.

📝 Abstract
Identifiability in representation learning is commonly evaluated using standard metrics (e.g., MCC, DCI, RÂČ) on synthetic benchmarks with known ground-truth factors. These metrics are assumed to reflect recovery up to the equivalence class guaranteed by identifiability theory. We show that this assumption holds only under specific structural conditions: each metric implicitly encodes assumptions about both the data-generating process (DGP) and the encoder. When these assumptions are violated, metrics become misspecified and can produce systematic false positives and false negatives. Such failures occur both within classical identifiability regimes and in post-hoc settings where identifiability is most needed. We introduce a taxonomy separating DGP assumptions from encoder geometry, use it to characterise the validity domains of existing metrics, and release an evaluation suite for reproducible stress testing and comparison.
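The false-negative failure mode the abstract describes can be illustrated with a small sketch. The snippet below computes MCC in a common way (absolute Pearson correlations between true and estimated factors, matched via the Hungarian algorithm); this is an assumed convention for illustration, not necessarily the paper's exact protocol. A component permutation leaves MCC at 1, but an invertible element-wise nonlinearity, which preserves the factors up to the usual equivalence class, pulls linear-correlation MCC well below 1:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(z_true, z_est):
    """Mean correlation coefficient: absolute Pearson correlations
    between true and estimated factors, with components matched by
    the Hungarian algorithm (an assumed, common convention)."""
    d = z_true.shape[1]
    # Cross-correlation block between true and estimated components.
    corr = np.corrcoef(z_true.T, z_est.T)[:d, d:]
    # Maximise total |correlation| over component assignments.
    row, col = linear_sum_assignment(-np.abs(corr))
    return np.abs(corr[row, col]).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(10_000, 2))

# Recovery up to permutation: MCC is 1.
print(mcc(z, z[:, ::-1]))      # 1.0

# An invertible element-wise nonlinearity preserves identifiability
# up to the equivalence class, yet linear-correlation MCC drops:
# a systematic false negative of the kind the abstract describes.
print(mcc(z, np.tanh(3 * z)))  # noticeably below 1.0
```

The second score falls not because the factors were lost but because Pearson correlation implicitly assumes a linear relation between true and recovered factors, which is exactly the sort of hidden metric assumption the paper's taxonomy makes explicit.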
Problem

Research questions and friction points this paper is trying to address.

identifiability
representation learning
evaluation metrics
data-generating process
encoder geometry