🤖 AI Summary
This paper addresses the theoretical ambiguity and initialization sensitivity of Contrast-Consistent Search (CCS), a method for unsupervised probing of binary semantic features (e.g., sentence truthfulness) in large language models (LLMs). The authors propose a paradigm centered on **relative contrast consistency**, reformulating CCS as a generalized eigenvalue problem that admits the first closed-form analytical solution. This eliminates dependence on random initialization and yields improved stability, interpretable eigenvalues, and natural multi-variable extensions. Experiments across multiple benchmark datasets show performance on par with the original CCS while drastically reducing variance across runs. The main contributions are: (1) introducing the concept of relative contrast consistency; (2) deriving the first closed-form solution for CCS; and (3) unifying its theoretical foundation and extending its applicability beyond binary probing.
📝 Abstract
Contrast-Consistent Search (CCS) is an unsupervised probing method that tests whether large language models represent binary features, such as sentence truth, in their internal activations. While CCS has shown promise, its two-term objective has been only partially understood. In this work, we revisit CCS with the aim of clarifying its mechanisms and extending its applicability. We argue that the quantity to optimize is relative contrast consistency. Building on this insight, we reformulate CCS as an eigenproblem, yielding closed-form solutions with interpretable eigenvalues and natural extensions to multiple variables. We evaluate these approaches across a range of datasets, finding that they recover performance comparable to CCS while avoiding its sensitivity to random initialization. Our results suggest that relativizing contrast consistency not only improves our understanding of CCS but also opens pathways for broader probing and mechanistic interpretability methods.
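To make the eigenproblem reformulation concrete, here is a minimal sketch of how a "relative" contrast-consistency objective can be solved in closed form. The specific matrices below are illustrative assumptions, not the paper's exact construction: we suppose the probe direction maximizes a Rayleigh quotient that contrasts the scatter of activation *differences* (the contrastive signal) against the scatter of activation *sums* (the common mode), which reduces to a generalized eigenvalue problem with an analytical solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical contrast-pair activations: x_pos[i] and x_neg[i] would be the
# model's hidden states for a statement and its negation (toy data here).
n, d = 200, 8
x_pos = rng.normal(size=(n, d))
x_neg = rng.normal(size=(n, d))

diffs = x_pos - x_neg                       # contrastive signal
sums = x_pos + x_neg                        # common-mode component

A = diffs.T @ diffs / n                     # scatter of differences
B = sums.T @ sums / n + 1e-6 * np.eye(d)    # scatter of sums (regularized)

# Solve the generalized eigenproblem A w = lam * B w in closed form by
# whitening with B^{-1/2} and taking an ordinary symmetric eigendecomposition.
Bvals, Bvecs = np.linalg.eigh(B)
B_inv_half = Bvecs @ np.diag(Bvals ** -0.5) @ Bvecs.T
vals, vecs = np.linalg.eigh(B_inv_half @ A @ B_inv_half)

w = B_inv_half @ vecs[:, -1]                # probe direction (top eigenvector)
score = vals[-1]                            # its relative-consistency eigenvalue
```

Two properties of this formulation mirror the claims in the abstract: the solution is deterministic (no random initialization, so no run-to-run variance), and the remaining eigenvectors `vecs[:, :-1]` provide a natural multi-variable extension, with each eigenvalue quantifying how consistently that direction separates the contrast pairs.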