🤖 AI Summary
This work addresses the absence of a clear definition of representation stability in prior work on representation learning, in particular the conflation of statistical consistency with structural alignment. It formally introduces the notions of statistical identifiability and structural identifiability, and proposes a model-agnostic ε-approximate identifiability framework that accommodates nonlinear decoders, such as those in masked autoencoders (MAEs) and supervised models. The framework extends identifiability theory to intermediate-layer representations and integrates ICA-based post-processing to achieve effective disentanglement. Experiments demonstrate state-of-the-art disentanglement performance on synthetic data and successful separation of biological variation from batch effects in foundation models for cellular microscopy, substantially improving downstream generalization.
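The statistical side of the framework asks whether representations from independent training runs agree up to a bounded error. A minimal illustrative check (an assumed setup, not the paper's protocol: the data, dimensions, and noise level here are invented for illustration) fits the best linear map between two runs' representations of the same inputs and measures the residual:

```python
# Illustrative check of statistical near-identifiability: representations of
# the same inputs from two runs should match up to a linear map, with the
# relative residual playing the role of the tolerance eps.
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for representations from two independent training runs.
z1 = rng.normal(size=(500, 8))
A = rng.normal(size=(8, 8))                       # unknown linear ambiguity
z2 = z1 @ A.T + 0.01 * rng.normal(size=(500, 8))  # run 2 = linear map + noise

# Fit the best linear map from run 1 to run 2 by least squares.
M, *_ = np.linalg.lstsq(z1, z2, rcond=None)
residual = np.linalg.norm(z1 @ M - z2) / np.linalg.norm(z2)
print(f"relative alignment error: {residual:.4f}")
```

A small residual indicates the two runs' representations are statistically near-identifiable up to the fitted linear transformation.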
📝 Abstract
Representation learning models exhibit a surprising stability in their internal representations. Whereas most prior work treats this stability as a single property, we formalize it as two distinct concepts: statistical identifiability (consistency of representations across runs) and structural identifiability (alignment of representations with some unobserved ground truth). Recognizing that perfect pointwise identifiability is generally unrealistic for modern representation learning models, we propose new model-agnostic definitions of statistical and structural near-identifiability of representations up to some error tolerance $ε$. Leveraging these definitions, we prove a statistical $ε$-near-identifiability result for the representations of models with nonlinear decoders, generalizing existing identifiability theory beyond last-layer representations in, e.g., generative pre-trained transformers (GPTs), to near-identifiability of the intermediate representations of a broad class of models including (masked) autoencoders (MAEs) and supervised learners. Although these weaker assumptions confer weaker identifiability, we show that independent component analysis (ICA) can resolve much of the remaining linear ambiguity for this class of models, and we validate and measure our near-identifiability claims empirically. With additional assumptions on the data-generating process, statistical identifiability extends to structural identifiability, yielding a simple and practical recipe for disentanglement: ICA post-processing of latent representations. On synthetic benchmarks, this approach achieves state-of-the-art disentanglement using a vanilla autoencoder. With a foundation model-scale MAE for cell microscopy, it disentangles biological variation from technical batch effects, substantially improving downstream generalization.
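The disentanglement recipe described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the synthetic sources, mixing matrix, and dimensions are invented, and scikit-learn's `FastICA` stands in for whatever ICA variant the paper uses. The latent codes are modeled as a linear mixture of independent non-Gaussian factors, the ambiguity ICA is known to resolve up to permutation and scaling:

```python
# Sketch of the paper's recipe: ICA post-processing of learned latents.
# The "latents" here simulate encoder outputs that are a linear mixture of
# independent non-Gaussian sources (the assumed data-generating process).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

sources = rng.laplace(size=(1000, 4))   # independent non-Gaussian factors
mixing = rng.normal(size=(4, 4))        # unknown linear ambiguity
latents = sources @ mixing.T            # stand-in for autoencoder latents

# ICA resolves the remaining linear ambiguity up to permutation and scaling,
# so each recovered column should track one underlying factor.
ica = FastICA(n_components=4, whiten="unit-variance", random_state=0)
recovered = ica.fit_transform(latents)
print(recovered.shape)  # one column per disentangled factor
```

In practice, `latents` would be the intermediate-layer representations of a trained MAE or supervised model, and the recovered components would be evaluated against known generative factors (synthetic benchmarks) or batch/biology annotations (microscopy).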