AI Summary
This work addresses the open problem of characterizing the computational complexity of finding local optima in contrastive learning. Focusing on representation learning with weighted triplet constraints, the paper systematically analyzes the tractability boundary of local search in both discrete and continuous settings. It establishes, for the first time, that the problem is PLS-hard in the discrete case and CLS-hard in the continuous case, implying that no polynomial-time algorithm can guarantee convergence to a local optimum unless PLS ⊆ P or CLS ⊆ P, respectively. Furthermore, the authors construct explicit instances demonstrating that even in one-dimensional embedding spaces, gradient-based methods may require exponential time to converge. These results reveal fundamental computational barriers inherent to local optimization in contrastive learning and provide a theoretical foundation for understanding the convergence limits of representation learning algorithms.
Abstract
Contrastive learning is a powerful technique for discovering meaningful data representations by optimizing objectives based on $\textit{contrastive information}$, often given as a set of weighted triplets $\{(x_i, y_i^+, z_i^-)\}_{i=1}^m$ indicating that an "anchor" $x_i$ is more similar to a "positive" example $y_i^+$ than to a "negative" example $z_i^-$. The goal is to find representations (e.g., embeddings in $\mathbb{R}^d$ or a tree metric) where anchors are placed closer to positive than to negative examples. While finding $\textit{global}$ optima of contrastive objectives is $\mathsf{NP}$-hard, the complexity of finding $\textit{local}$ optima -- representations that cannot be improved by local search algorithms such as gradient-based methods -- remains open. Our work settles the complexity of finding local optima in various contrastive learning problems by proving $\mathsf{PLS}$-hardness in discrete settings (e.g., maximizing the number of satisfied triplets) and $\mathsf{CLS}$-hardness in continuous settings (e.g., minimizing the Triplet Loss), where $\mathsf{PLS}$ (Polynomial Local Search) and $\mathsf{CLS}$ (Continuous Local Search) are well-studied complexity classes capturing local search dynamics in discrete and continuous optimization, respectively. Our results imply that no polynomial-time algorithm (local search or otherwise) can find a local optimum for various contrastive learning problems, unless $\mathsf{PLS} \subseteq \mathsf{P}$ (or $\mathsf{CLS} \subseteq \mathsf{P}$ for continuous problems). Even in the unlikely scenario that $\mathsf{PLS} \subseteq \mathsf{P}$ (or $\mathsf{CLS} \subseteq \mathsf{P}$), our reductions imply that there exist instances where local search algorithms need exponential time to reach a local optimum, even for $d=1$ (embeddings on a line).
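To make the continuous objective concrete, here is a minimal sketch of the standard weighted Triplet Loss on one-dimensional embeddings ($d=1$, points on a line), minimized by a simple finite-difference gradient descent. This is an illustrative assumption-based example, not the paper's hard instances; the margin value, data, and function names are hypothetical.

```python
def triplet_loss(emb, triplets, margin=1.0):
    """Weighted Triplet Loss on the line.

    emb: dict mapping a point's name to its position in R (d = 1).
    triplets: list of (weight, anchor, positive, negative) tuples.
    Each triplet contributes w * max(0, |x - y+| - |x - z-| + margin),
    i.e., zero loss once the anchor is closer to the positive than to
    the negative by at least the margin.
    """
    total = 0.0
    for w, a, p, n in triplets:
        d_pos = abs(emb[a] - emb[p])  # anchor-positive distance
        d_neg = abs(emb[a] - emb[n])  # anchor-negative distance
        total += w * max(0.0, d_pos - d_neg + margin)
    return total


def descend(emb, triplets, lr=0.1, steps=100, eps=1e-6):
    """Naive local search: coordinate-wise finite-difference gradient descent.

    This is exactly the kind of gradient-based local search whose worst-case
    convergence time the hardness results concern; on easy instances like the
    one below it converges quickly, but the reductions show instances where
    such dynamics need exponentially many steps, even for d = 1.
    """
    for _ in range(steps):
        for k in emb:
            bumped = dict(emb)
            bumped[k] += eps
            grad = (triplet_loss(bumped, triplets) - triplet_loss(emb, triplets)) / eps
            emb[k] -= lr * grad
    return emb
```

For example, with `emb = {'a': 0.0, 'p': 0.5, 'n': 1.0}` and the single triplet `(1.0, 'a', 'p', 'n')`, the initial loss is `0.5`; gradient descent pushes the negative away from the anchor (and the positive toward it) until the hinge is inactive and a local optimum is reached.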